
Issue #136 | The Ultimate Meta Ads Account Audit, Part II

Sam Tomlinson <sam@samtomlinson.me>
October 5, 2025


***********************

Happy Sunday, Everyone!

***********************

I hope you’re all enjoying playoff baseball and the first weekend

of Q4! I just returned from Groceryshop (an absolutely wonderful

show) and am off the road for a few weeks – then it’s back to

Vegas, California & Boston for a series of events focused on

Senior Living, Legal & Tech.

We’re back with the second half of the Meta Ads Audit Guide. Last

week’s issue (

link

) dove deep into the side of Meta most marketers skip: the

foundation. We talked about understanding the business

model/goals/objectives, researching your target audience, mapping

the competitive landscape and fixing your data infrastructure.

That split was intentional - because 80%+ of the results you see

in Ads Manager are, directly or indirectly, influenced by the

stuff outside the ad account.

For today’s issue, we’re going to open the hood on the account

itself.

I’ve completed well over 100 Meta Ads Audits over the past few

years. And every time I’m engaged to start a new one, the

person/brand commissioning it asks a form of the same question:

what do you look for? What separates the remarkable,

high-performing accounts from the ones that consistently fall

short?

In my experience, it comes down to 4 things, each executed

uncommonly well:

* An account architecture that aligns with the two things that

matter most to the business: people (customers) and profit (how

the business makes + retains money)

* Full-journey creative alignment - ads aligned to the audience;

post-click experiences aligned to the ads; product/service

experiences that keep the promises made from the outset of the

relationship

* A legitimate, well-informed testing strategy that balances 10%

(incremental) gains with 10x (revolutionary) swings

* An anti-fragile, future-proofing approach that maximizes the

probability that the account will be able to thrive (not just

survive) future disruptions and changes

This issue is a deep dive into those four levers. I’ll share the

specific diagnostic tests and metrics I use to assess each one,

along with the less obvious traps like attribution distortions,

budgetary blind spots, creative stagnation and post-click

experience issues that, left unchecked, will materially degrade

performance.

Let’s get to it.

------------------------

The Ad Account Structure

------------------------

Last week’s issue was squarely focused on building a strong

foundation under the account; this week’s begins with ensuring

the structure itself isn’t warped.

The reality is that - for most media buyers - account structure

is treated as busywork. It’s a thing that has to be done, but not

something that should be done with obsessive care or careful

thought. To be blunt: that’s a mistake.

Structure is a value statement. It is how you (the media buyer)

communicate your goals and priorities to the machine (Meta). If

your structure is completely flat OR hyper-fragmented, you’re

saying everything is equally important, which means nothing is

important.

A poor structure will destroy signal quality, which, in turn,

will erode profitability. Meta’s machine learning thrives

when it has clean, consistent feedback loops. Most accounts I

audit make that harder than it needs to be.

In my experience, there are four major “structure” red flags:

* Over-fragmentation: dozens of campaigns or ad sets, most (or

all) starved of sufficient budget to exit the learning phase.

* Structural drift (or a shanty-town structure): a slew of legacy

campaigns with mis-aligned goals, outdated audiences/creative,

incorrect exclusions or no-longer-relevant targets still

hoovering up spend because they’ve “worked in the past.”

* Conflated Goals: ad sets with different hero products/services,

audience targets + different optimization actions, all jammed

into the same overarching campaign (with either CBO or ABO). I

have yet to see this actually perform…but it happens in >30% of

accounts.

* Broad Bro: this one is particularly pervasive in B2B SaaS + B2C

lead gen - campaign structures that default to “broad” -

resulting in a significant amount of spend dedicated to people

who are obviously DQ’d for the business.

A healthy account structure tends to look deceptively simple:

campaigns focused around a single offer/angle, with an

optimization action + attribution window aligned to the business

objectives. Each campaign has 1-3 well-designed prospecting ad

sets with tailored creative/messaging plus a smart retargeting ad

set. The very best have a dedicated testing structure (either a

testing campaign, or a method for integrating test concepts into

the existing structure), plus exclusions that keep spend flowing

toward incremental opportunity rather than back to existing

buyers.

It’s not rocket science. It’s the basics executed with uncommon

brilliance.

When I audit structure, I’m not just counting campaigns. I’m

asking:

Is the account built around how this business actually generates

profit?

For a multi-SKU ecommerce brand, that may mean segmenting by hero

product line and evergreen bundles rather than by creative type.

For a lead-gen service business, it may mean campaigns tied to

service offering, geo or service tier. Structure is an

operational map: it should mirror how profit (or contribution

margin) is actually created.

Is budget flowing to the right places?

Meta - done well - is both a demand creation AND demand capture

machine. But - left to its own devices - the platform will

default to the path of least resistance. For some accounts, that

means over-indexing toward remarketing (demand capture) at the

expense of prospecting, because that’s the easiest way to make the

ROAS number look good; for others, it’ll drop 95%+ of spend to

net-new audiences, with virtually no follow up (because that

makes the rCPM sparkle). Neither is optimal. A high-performing

account tends to have a healthy balance between prospecting /

demand creation (~80%) and demand capture (~20%). Assess this by

pulling a 30-day spend report by audience – if you see that 50%+

of your budget is going to your WCA and existing customers,

you’re likely way too heavy on remarketing.

A second, quick diagnostic that catches most of the big mistakes:

if the top three campaigns don’t account for at least 60% of

spend, or if more than a handful of campaigns each produce fewer

than 25 optimization events (“conversions”) in 30 days,

fragmentation is likely kneecapping performance.
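A minimal sketch of those two checks, assuming a 30-day Ads Manager export saved as a CSV with hypothetical columns (campaign, audience_type, spend, conversions) - the column names and file name are placeholders, not Meta's export schema:

import pandas as pd

df = pd.read_csv("meta_30day_export.csv")

# 1) Remarketing share: flag if >50% of spend hits warm/existing audiences.
warm = df[df["audience_type"].isin(["remarketing", "existing_customer"])]
warm_share = warm["spend"].sum() / df["spend"].sum()
print(f"Warm-audience spend share: {warm_share:.0%}  (flag if over ~50%)")

# 2) Concentration: top three campaigns should hold >= 60% of spend.
by_campaign = df.groupby("campaign")[["spend", "conversions"]].sum()
top3_share = by_campaign["spend"].nlargest(3).sum() / by_campaign["spend"].sum()
print(f"Top-3 campaign spend share: {top3_share:.0%}  (flag if under ~60%)")

# 3) Fragmentation: campaigns with fewer than 25 optimization events in 30 days.
starved = by_campaign[by_campaign["conversions"] < 25]
print(f"{len(starved)} campaign(s) under 25 optimization events in 30 days")
print(starved.sort_values("spend", ascending=False))

It’s crude, but it surfaces the big three structural problems in about thirty seconds.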

The other question is whether the structure facilitates scale.

Meta’s algorithm needs 25+ optimization actions per ad set per

week to exit learning and stabilize delivery (their official

documentation says 50, but I’ve had plenty of ad sets exit

learning at 15-25). If you can’t reliably get to ~2 optimization

actions per day, per ad set, then something must change. The

simplest resolution is consolidating budgets into fewer,

better-defined ad sets - which often lowers CPA 10–20% without

touching creative or targets.

-------------------------

Budgets: Follow The Money

-------------------------

A budget analysis often reveals more about what’s holding an

account back than any random setting or hidden report. Many

brands assume their budget distribution is rational because

they’ve been “optimizing” over time. In practice, spend often

clings to legacy campaigns that once performed but no longer

contribute incremental growth.

To diagnose this, compare three things: spend by campaign,

new-customer revenue (or NC-ROAS) by campaign, and marketing

efficiency ratio (MER) at each level of scale. Any ad set that

consumes >10% of budget but contributes a disproportionately small share of new-customer revenue deserves scrutiny. A wonderful side effect

of this exercise is that it identifies high-efficiency campaigns

artificially capped by budget. Shifting even $10,000 a month from

an inefficient campaign to a hyper-efficient-at-low-scale

campaign will improve MER more than just about anything else you

could do.

The real goal here is to understand the marginal return curve: if

I add $1 to this campaign, how much incremental new-customer

revenue do I get? If the curve is flat or declining, that’s your

cue to shift dollars elsewhere. Where those dollars should go

depends on the business or account – it might be to a more

efficient campaign; it might be to a different geo or service

line; it might be to a different platform (like Google or

Pinterest or YouTube).

Just because the dollar is being spent on Meta today does not

mean it should be tomorrow.
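If you want to approximate that marginal return curve without a full incrementality study, here’s a rough sketch comparing two 30-day exports (prior vs. current period). The file and column names (campaign, spend, nc_revenue) are hypothetical, and the flag thresholds are illustrative, not gospel:

import pandas as pd

prior = pd.read_csv("meta_prior_30d.csv").set_index("campaign")
current = pd.read_csv("meta_current_30d.csv").set_index("campaign")

merged = prior.join(current, lsuffix="_prior", rsuffix="_current", how="inner")
merged["delta_spend"] = merged["spend_current"] - merged["spend_prior"]
merged["delta_nc_rev"] = merged["nc_revenue_current"] - merged["nc_revenue_prior"]

# Crude marginal return: extra new-customer revenue per extra dollar spent.
moved = merged[merged["delta_spend"].abs() > 0].copy()
moved["marginal_return"] = moved["delta_nc_rev"] / moved["delta_spend"]
moved["spend_share"] = moved["spend_current"] / moved["spend_current"].sum()

# Illustrative flag: >10% of budget with a flat-or-worse marginal return.
flags = moved[(moved["spend_share"] > 0.10) & (moved["marginal_return"] < 1.0)]
print(flags[["spend_share", "marginal_return"]].sort_values("spend_share", ascending=False))

Anything that shows up in that flag list is a candidate for reallocation - to another campaign, geo or platform.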

Daily Budgets Are A Silent Account Killer

-----------------------------------------

One of the most common (and least discussed) drags on Meta

performance is the over-use of rigid daily budgets. I see it all

the time - budgets set based on neat spreadsheet rows (whatever

the CFO allocated, divided by 30.4) - rather than informed by the

market dynamics. On paper, setting daily caps like this seems

smart and responsible - budgets are essentially guardrails that

promise tighter control, more predictable capital deployment and

(unless you’re bad at math) eliminate the possibility of blowing

the budget.

In practice, they often do the opposite. The reality is that

neither your audience nor Meta behave in a uniform manner. A

massive segment of your target audience might be keen to buy your

product on a weekend, but utterly exhausted and unwilling on a

Thursday night. When that happens, even Meta’s ability to exceed

the daily cap by up to 75% is insufficient – Meta might be able

to spend 10x your daily budget at your desired efficiency target

on Sunday, but not be able to deploy more than 25% of it on

Thursday.

When this situation arises (and it does far more than most media

buyers want to admit), the daily budget ceases to be a guardrail

and starts functioning like an inhibitor. Instead of your account

being able to spend $5,000 at a 5.0 ROAS (netting you $25k!), it

can only spend $500 at that 5.0 ROAS - meaning you miss out on

$22,500 in marginal revenue (roughly $18,000 after the incremental ad costs). In virtually every

case, your overall performance would be better if you dropped the

entire Wednesday + Thursday budget for the month on that single

Sunday.
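To make that math concrete, here’s the back-of-the-envelope version using the illustrative numbers above:

roas = 5.0
uncapped_spend, capped_spend = 5_000, 500

missed_revenue = (uncapped_spend - capped_spend) * roas                  # $22,500
missed_after_ad_cost = missed_revenue - (uncapped_spend - capped_spend)  # $18,000
print(missed_revenue, missed_after_ad_cost)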

A related issue: strict daily caps create delivery volatility.

It’s common to see a budget-capped campaign hit its limit

mid-afternoon, pausing delivery right as Meta has found a

converting pocket of traffic. The following morning, the pacing

model starts cautiously to avoid overspend. This start-stop

rhythm produces uneven impression distribution and inflated CPMs

- not because the audience changed or the creative is bad, but

because the budget guardrails throttled delivery at the wrong

time.

A better approach for most evergreen or high-volume campaigns is

to allocate sufficient budget such that either the target (Cost

Cap/Bid Cap) or the campaign budget (lifetime budget) is the

limiter - not the daily budget. Either change gives Meta’s pacing

algorithm room to smooth spend across the periods when demand and

opportunity are present, leaning in on high-conversion hours or

days and pulling back when traffic quality dips. The result is

more stable, predictable delivery, more consistent CPMs and a

sufficiently high optimization event volume to keep the ad set

out of the learning phase.

A simple way to test the impact:

* Identify one or two of your best-performing prospecting

campaigns that already meet your efficiency goals

* Shift them from daily to a lifetime or 7-day rolling budget

using the same total allocation

* Monitor delivery, CPM, CPA, and MER over a two-week period.

Most advertisers find that this single change reduces volatility

and improves cost-per-result - all without touching creative or

bids.

The takeaway: daily budgets often starve Meta of the efficiency,

signal density and pacing flexibility it needs to spend

optimally. Loosening those constraints is often one of the

lowest-effort, highest-impact steps you can take to stabilize

performance. You will need to actually look at your account when

you do this (and intervene sometimes!) - but the rewards

(improved efficiency + more stable performance) are often worth

the risk.

----------------------------------------

Incrementality: Don’t Trust Meta Blindly

----------------------------------------

One of the most counter-intuitive traps in Meta audits is letting

in-platform ROAS dictate all decisions. A campaign can look

spectacular in Ads Manager while doing very little for the actual

business.

I’ve seen this most often in two scenarios:

• When a large share of conversions are actually existing

prospects or customers making repeat purchases. Meta happily

claims the credit, but incremental revenue barely budges when the

campaign is paused – all that happened was conversions that would

have gone to email or direct get attributed to Meta.

• When accounts over-index on 1-day view attribution, which can

make a campaign look like a hero while adding little genuine

lift.

There’s also the inverse (which happens more than most

performance marketers or Meta Ads X Gurus want to admit): Meta

looks like garbage on a last-click attribution basis, but is

quietly driving top-of-funnel traffic that closes later through

branded search, affiliate, organic, or retail/in-person/on-call.

That’s why I always compare in-platform ROAS to MER (marketing

efficiency ratio) and NC-ROAS (new-customer ROAS). If platform

performance is climbing while MER and NC-ROAS remain flat (or

worse, decline) - you’re buying the same customers twice.

When possible, I look at 28-day click vs. 7-day click vs. 1-day

click vs. 1-day view data (Ads Manager → compare attribution

settings) to understand if/where Meta is having an impact. If a

significant portion of Meta’s claimed conversions are 1DV, that’s

a strong signal the true, incremental impact might be lower than

claimed. If you’re seeing a large chunk of optimization actions

in 1DC, you’re (more than likely) overindexing on remarketing.

My preference (in almost all cases) is to look at 7DC – that

tends to be a good balance of immediacy, impact (click-based

attribution actually forces Meta to send you traffic, not just

claim eyeballs) and true incrementality.
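If you want to put numbers on that gut check, here’s a minimal sketch assuming you’ve exported the “compare attribution settings” view to a CSV with hypothetical columns (conv_1d_view, conv_1d_click, conv_7d_click). Because the windows overlap, treat the outputs as directional signals, not exact splits:

import pandas as pd

df = pd.read_csv("meta_attribution_compare.csv")
t = df[["conv_1d_view", "conv_1d_click", "conv_7d_click"]].sum()

# Heavy 1DV relative to 7DC suggests claimed impact > incremental impact.
view_share = t["conv_1d_view"] / (t["conv_1d_view"] + t["conv_7d_click"])
print(f"1-day-view share of claimed conversions: {view_share:.0%}")

# A 1DC/7DC ratio near 1.0 means nearly all click conversions land within a
# day of the click - often a tell that delivery is skewing toward warm audiences.
print(f"1DC / 7DC ratio: {t['conv_1d_click'] / t['conv_7d_click']:.2f}")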

A second, and related, mistake: brands making decisions based on

a disconnected third-party attribution tool (like Triple Whale or

Northbeam). The results in a nefarious issue: disintermediating

the optimization actions (changing targets or budgets) from the

data in Meta’s view. Translation: Meta has no idea why you’re

doing what you’re doing. Think of it like a teacher (the TPA

platform) telling a parent (the media buyer) that their

son/daughter (Meta) was behaving badly in class - then the

parent, with no explanation, sends the child to bed without

supper when they arrive home. The child (Meta) has absolutely no

idea why this is happening - and is just as likely to act out in

the future as s/he is to figure out why this terrible thing

happened. The better solution is for the parent to communicate

the issue to the child, and provide the concrete details leading

to the consequences. Fortunately, Meta allows you to do this by

uploading TPA data via the Conversions API (assuming you’re using

a compatible TPA tool). If you are using TPA, please ensure it is

integrated into Meta Ads, so you aren’t (inadvertently) sending

Meta to bed without supper.
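For reference, sending a server-side conversion back to Meta looks roughly like this. The pixel ID, token, API version and order details are all placeholders, and most TPA tools handle this for you - verify field requirements against Meta’s current Conversions API documentation before relying on it:

import hashlib, time, requests

PIXEL_ID = "YOUR_PIXEL_ID"          # placeholder
ACCESS_TOKEN = "YOUR_CAPI_TOKEN"    # placeholder

def hash_email(email: str) -> str:
    # Meta expects SHA-256 hashes of normalized (trimmed, lowercased) identifiers.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

event = {
    "event_name": "Purchase",
    "event_time": int(time.time()),
    "event_id": "order-10432",      # matching the browser event's ID enables deduplication
    "action_source": "website",
    "user_data": {"em": [hash_email("customer@example.com")]},
    "custom_data": {"currency": "USD", "value": 129.00},
}

resp = requests.post(
    f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events",  # version is a placeholder
    params={"access_token": ACCESS_TOKEN},
    json={"data": [event]},
)
print(resp.status_code, resp.json())

The event_id is the important part - it’s what lets Meta deduplicate the server event against the browser pixel event instead of double-counting.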

--------------------------------

Creative: The Real Growth Engine

--------------------------------

If architecture is the skeleton, creative is the muscle - it’s

what actually moves the algorithm. No other factor has a larger

impact on sustained scaling than creative diversity, velocity and

alignment.

Meta’s single-greatest advantage is its ability to match the

right message to the right person at the right time. But if your

ads don’t provide it with compelling stories to work with (or

worse, if your ads promise one thing and your post-click

experience delivers another), even Meta’s world-class machine

learning can’t save you.

I think of creative performance in terms of three alignments that

must click like gears:

* Creative–Audience Alignment: Does the ad open with a hook that

actually matters to the audience seeing it? Can the ad earn the

attention of your desired audience with consistency and

regularity?

* Creative–Lander Alignment: Does the landing page reinforce the

exact promise or pain point the ad led with, above the fold and

without friction?

* Audience–Lander Alignment: Are we sending the right segment to

the right destination, or dumping everyone onto the same generic

page?

A classic failure pattern is a brilliant UGC video for a

limited-time bundle that drives high CTR, but the click leads to

a generic category page that doesn’t mention the bundle. CTR

looks great; CVR is terribad as users feel misled or don’t feel

like working for it; Meta optimizes in the wrong direction. When

these three alignments lock in, you often see CTR jump 30–50% and

CVR climb 20–40% with zero change to budget or bids.

Unlike structure and budget, where there are (pretty solid)

quantitative tests you can use, creative is a true fusion of art

and science. The solution is to evaluate it using a combination

of unbiased qualitative assessment and quantitative metrics:

For video, I start with thumb-stop rate - the percentage of

impressions that hold a viewer for at least three seconds. Under

25% is a red flag in most categories. I also look at hook-to-hold

rate: of those who watched three seconds, how many continued

watching for at least 15 seconds? If that’s below 35–40%, there is

likely a disconnect in the “body” of the ad.

Across all formats, I monitor CTR-Link (prospecting should

generally clear 0.8–1.0% in ecommerce) and Cost per 1,000 new

accounts reached to catch saturation or weak hooks masked by

remarketing efficiency.

Finally, there’s LP CVR. If CTR is healthy but CVR dips below

1–2% for ecommerce or below 2-3% for lead-gen, that’s almost

always a lander misalignment: either the page is too slow (mobile

load >3 seconds), too confusing (sending a specific bundle

audience to a generic shop page), too cluttered or simply not

aligned to the ad (resulting in your audience feeling like it’s a

bait-and-switch).
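Here’s a minimal sketch of those checks run against a per-ad export. The column names (impressions, video_plays_3s, video_plays_15s, link_clicks, lp_views, purchases) are hypothetical stand-ins for whatever your export calls them, and the thresholds are the rough benchmarks above:

import pandas as pd

ads = pd.read_csv("meta_creative_export.csv")

ads["thumb_stop_rate"] = ads["video_plays_3s"] / ads["impressions"]    # flag < ~25%
ads["hook_to_hold"] = ads["video_plays_15s"] / ads["video_plays_3s"]   # flag < ~35-40%
ads["ctr_link"] = ads["link_clicks"] / ads["impressions"]              # e-com prospecting: ~0.8-1.0%+
ads["lp_cvr"] = ads["purchases"] / ads["lp_views"]                     # e-com: ~1-2%+; lead gen: ~2-3%+

flags = ads[(ads["thumb_stop_rate"] < 0.25) | (ads["ctr_link"] < 0.008)]
print(flags[["ad_name", "thumb_stop_rate", "hook_to_hold", "ctr_link", "lp_cvr"]])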

A creative audit also means looking at diversity and velocity. A

good prospecting campaign typically needs at least 6-10 active

concepts (note: a “concept” is a unique creative - not a

different color font or a slightly different image) running at

any given time, preferably a mix of UGC, static, carousel, demos,

testimonials, and benefit-driven explainer formats. The majority

of these will fail; when they do, pause them out and introduce

new ones. Creative follows a power law (more on that here (

link

)) - which means your account must continually introduce new

creatives to find new winners.

My first quick check: If 70% of spend in the L90 days was

directed to fewer than five ads, that’s usually a sign that the

account/campaign is over-reliant on a handful of winners…and if

one of those stops performing, there’s a world of hurt on the

horizon.
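That check takes one line once you have a per-ad spend export for the last 90 days (file and column names here are hypothetical):

import pandas as pd

ads = pd.read_csv("meta_l90_ads.csv")
top5_share = ads.nlargest(5, "spend")["spend"].sum() / ads["spend"].sum()
print(f"Top-5 ads' share of L90 spend: {top5_share:.0%}  (flag if over ~70%)")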

The next check: ask for the creative tracker and implementation

plan. Is there a cadence for refreshing hooks and angles every

7–14 days, or is the account content to run the same “winner” for

months until it burns out? Are headlines and CTAs being tested

deliberately, or swapped haphazardly? Are creative concepts

tagged by theme so that we know whether “problem/solution” videos

outperform “testimonial” carousels, or are we just guessing?

Most accounts fail every one of those tests, which is why they

are asking for an audit in the first place.

-------------------------------------------------

Testing Discipline: Stop Guessing, Start Learning

-------------------------------------------------

The second most common creative mistake after stagnation is

chaotic testing.

A lot of brands think they’re testing because they launch 10–20

ad variations at once. In reality, they’re throwing a bunch of

coins in the air and calling whichever one lands heads a winner.

Statistically, every ad that gets early delivery is just as

likely to be a loser as a winner — you’re just observing noise

masquerading as signal. That’s actually not hyperbole. This

happens because most of those ads never reach the sample size

needed to prove anything. Meta’s algorithm is designed to favor

whichever ad racks up the first few conversions at an

acceptable-or-better efficiency, regardless of whether that

concept is actually superior and sustainable over the long-run.

That early bias creates a Type I error (a false positive, where a

weak ad looks strong); at the same time, other ads that might

have performed better if given fair delivery never get enough

impressions to prove themselves - a Type II error (a false

negative that leaves potential winners lying dormant, with no

spend). If you’re curious about the math, check out this video on

my YouTube channel (

link

) where I break it down.

The result is a statistical illusion: a few random spikes

presented as insight, budgets flowing to the wrong ads and no

legitimate learning to show for any of it.

A disciplined testing framework feels slower at first because you

fund each variation long enough to gather meaningful evidence.

But that rigor makes scaling dramatically faster, because you’re

backing ads that are legitimately viable over a mid-to-long

horizon - not those that merely got lucky in the opening round.

The alternative is to balance quick hits with long-term bets -

what I’ve termed the 10% or 10x Approach To Testing (there’s a

full article on it here (

link

)).

Why the Mix Matters

-------------------

Incremental tests - the 10% tests - optimize what already works.

They optimize the creative + post-click experience, push down CPA

and cumulatively result in meaningful lifts over months. But by

themselves, they trap you at a local maximum; you get a little

higher on what might be a smaller mountain.

Big swings - the 10x bets - are where breakthroughs happen. They

test entirely new offers, new hooks or new audience approaches

that have the potential to double or triple your ad account’s

output. Most of them will fail or be flat. That’s fine. The goal

isn’t to hit 1.000; it’s to uncover the one or two bets a year

that make every prior incremental win feel small.

A sound testing strategy fuses the two. If your last 10 tests

were all micro-tweaks (headline phrasing, button color, minor

copy swaps), you’re due for a radical bet. If your last 13 tests

were massive swings, you should probably introduce a few

incremental tests to stabilize the gains.

The exact balance will vary based on any number of factors - the

level of maturity of your business (start ups and early stage

businesses are almost always better off placing the majority of

their effort on 10x tests; mature/late-stage businesses should

focus more on the 10% bets that unlock added efficiency on

already-massive spend), your level of product-market fit (if you

don’t have PMF, spend more on 10x tests), and your business’s

adaptability (if you can’t quickly change bundles or service

offerings, then those 10x tests aren’t likely to be viable).

Designing a Testing Mix

-----------------------

Think of a quarter’s testing calendar as a portfolio. Mature

brands should bias about 60–80% of testing toward incremental

lifts (headline hooks, CTA copy, hero image changes, creative

concept refinements). The remaining 20-40% should be set aside

for transformational bets: new value propositions, long-form

storytelling, bundle or subscription shifts, landing-page

architecture changes, or audience expansion into entirely new or

untapped audiences.

Younger brands with less to lose can skew the other way — more

moonshots early on, because a single breakthrough often matters

more than marginal efficiency gains.

Guardrails for Both Types of Tests

----------------------------------

Whether it’s a 10% tweak or a 10x swing, a test is still a test.

It needs enough runway to prove or disprove itself. Too many

accounts declare winners after two days or with fewer than a

couple dozen conversions per variant. That’s not testing; it’s

glorified gambling.

For incremental tests:

* Run only two to three variations at once so each gets

meaningful delivery.

* Aim for at least 2,000 impressions and ~15 conversions per

variant - you don’t need statistical significance (we’re trying

to make money, not publish a paper), but you do need something

more than first day vibes. There’s plenty of room for a happy

medium between the two extremes.

* Let tests run at least a full week to capture weekday/weekend

behavior swings

For big swings:

* Accept that sample sizes will be similar, but be ready to

invest a larger budget to give them a fair shot.

* Treat a promising early signal as an invitation to run a

confirmation test against a fresh audience before scaling.

* When multiple variables shift at once (as often happens with

10x bets) document exactly what changed so you can isolate the

lever if it works.
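A minimal readiness check against the guardrails above, with hypothetical per-variant numbers plugged in - the point is simply that no variant gets called until it clears all three bars:

MIN_IMPRESSIONS, MIN_CONVERSIONS, MIN_DAYS = 2_000, 15, 7

variants = [
    {"name": "hook_a", "impressions": 4_100, "conversions": 22, "days_live": 9},
    {"name": "hook_b", "impressions": 1_300, "conversions": 6, "days_live": 9},
]

for v in variants:
    ready = (v["impressions"] >= MIN_IMPRESSIONS
             and v["conversions"] >= MIN_CONVERSIONS
             and v["days_live"] >= MIN_DAYS)
    print(v["name"], "-> enough evidence to call" if ready else "-> keep funding (or kill deliberately)")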

Most brands I audit have no idea how to think about a 10% vs a

10x test, so here’s one example I often share: Imagine a bedding

brand (something we all know because we all sleep).

A 10% test might be as simple as swapping sterile product shots

for lifestyle imagery showing the duvet in a real bedroom,

expecting an 8 percent bump in CTR and a small CPA drop. A 10x

test could be bundling a duvet with pillows and bamboo sheets,

along with a “bedding for life” subscription (sending a new set

of tailored-to-the-season sheets every 90 days). That’s a

fundamental shift in both the offer and funnel. If the re-worked

idea hits, it would more than triple AOV and reduce new customer

payback period by 65% – something that no sequence of small image

or PDP tweaks would ever accomplish.

I recommend earmarking a fixed slice of the account’s monthly

spend specifically for testing. For many e-commerce brands that’s

10–20% of total budget; for high-consideration services it might

be slightly lower (10% to 15%). Within that slice, reserve

anywhere from 33% to 67% for big swings. The discipline of

allocating budget this way keeps testing from cannibalizing

evergreen performance campaigns, while still giving radical bets

the resources necessary to see if they have legs.

An effective Meta Ads audit should flag not just whether testing

exists but whether it’s balanced, funded and properly

instrumented to produce real learnings. Look for evidence that

the account runs on a testing calendar, that it has guardrails

for sample size and duration and that there’s a clear pipeline of

both marginal and breakthrough ideas. A brand that hasn’t tested

anything beyond incremental creative tweaks in 6 months is

sitting on hidden upside. A brand running only moonshots with no

steady 10% wins is either still in search of PMF or incinerating

money with little regard for progressively improving efficiency

(both bad).

Testing is how you find tomorrow’s growth engine before your

current one stalls out. Treat it like a portfolio: steady

compounding bets that keep you efficient, punctuated by the bold

explorations that can rewrite what your business/funnel is

capable of. The audit should leave no doubt about which side

you’ve been leaning on - and where you need to rebalance.

------------------------------

Landing-Page and CRO Alignment

------------------------------

No matter how brilliant your ad is, how perfectly you target it

or how sharp your offer seems, in my experience, 80%+ of the

impact happens after someone reaches your post-click experience.

I’ve sat with brands where every upstream component was pristine:

creative, targeting, infrastructure…and performance still tanked.

The culprit was always the same: the post-click experience.

A persuasive ad only wins the right to continue the conversation.

The landing page is the digital salesperson that must pick up

that conversation, translate the interest into attention, then

convert it into your desired action. If your lander is

disconnected, generic, slow, or confusing, you aren’t losing

potential sales/leads; you’re throwing away the resources you

spent to earn that attention + buy the click.

Here’s how I audit that experience:

1. Narrative & Expectation Alignment

------------------------------------

* The lander must mirror exactly what the ad promised - same

hook, same problem language, same emotional tone. If your ad says

“creative fatigue is killing your ROAS,” but the landing page

opens with “scaling e-commerce brands,” you’ve effectively reset

the narrative.

* Use narrative-specific landing variants: each ad angle (pain,

identity, urgency) should route to a slightly tailored lander

that nurtures the same story, not a catch-all generic page.

* Recognize that attention is fluid: users click, they skim, they

drift. If you don’t reinvest attention immediately with clarity,

trust signals, relevance and micro-convictions, you lose them.

2. Clarity Velocity: Move Users Through Four Questions Quickly

--------------------------------------------------------------

Your user’s journey on your lander should flow as naturally as

this sequence:

* What is this?

* Is it for me?

* Can I trust it?

* What should I do now?

Delay or confusion at any step kills momentum. The faster you can

move someone through those questions, the more sales (or leads)

you’ll earn. Pages that linger on “time on site” as a success

metric are often masking confusion, not engagement.

3. Friction: The Difference Between Taxing and Earning

------------------------------------------------------

Not all friction is bad. The trick is to remove unjustified

friction (surprise load times, broken forms, hidden terms,

unnecessary fees, whatever) while injecting framed friction that

creates value or weeds out non-serious visitors.

* Cognitive friction: confusion created by mismatched messaging,

complex layouts, or unclear hierarchy

* Emotional friction: uncertainty, skepticism, or fear of making

a mistake

* Mechanical friction: slow load times, janky forms, incompatible

mobile layouts

Great post-click experiences remove the first and manage the next

two. Offer gated quizzes, multi-step flows or micro-explainers

not to punish the user, but to earn their attention, commitment

and clarity.

4. Proof, Recognition & Trust Signals

-------------------------------------

The page must reflect “this is for you.”

* Show vertical-specific social proof and case studies near CTAs,

not scattered in footers. Proof + trust points often act as the

“final push” that gets your potential customer over the hump - so

concentrate their impact where it will be most powerful.

* Mirror the problem language from the ad. If your ad spoke to

“creative burnout,” the lander should use that same phrase, not a

diluted synonym.

* Use recognition, not shallow personalization. Don’t greet users

by name; show them you know their world. Talk to their pain

state. That’s what builds belief.

5. Behavior Diagnostics & Optimization

--------------------------------------

* Use scrollmaps, heatmaps, session recordings to see where

attention decays.

* Watch for long time on site with low conversion: often a sign

of confusion, not engagement.

* A/B test micro changes like anchor links, CTA progression

("Curious? Explore → Ready? Let’s go → Act now"), and narrative

reordering.

* Measure not just conversion, but “speed-to-understanding”: how

quickly does someone land, read a headline, see a value promise,

and know what to do next?

In short: in your audit, don’t just benchmark headline match and

load time. Probe whether the lander earns the attention your ad

bought. The performance delta rarely lies in brighter images or

button colors - it lives in how well the post-click experience

converts casual interest into real intent.

--------------------------------------------------

Seasonality, Promotions & Contextual Normalization

--------------------------------------------------

A frequent audit mistake is blaming creative or audience settings

for swings that were actually caused by promo calendars,

seasonality or inventory shocks.

Before diagnosing performance changes, I gather the brand’s promo

calendar, product-drop schedule, inventory notes, and any

external events that might have influenced buying patterns

(tariffs, port delays, recalls, etc.).

A prospecting CPA spike in October may be entirely predictable if

the brand historically spends light in late summer and ramps

heavily into BFCM promos. An evergreen lander A/B test during a

30% off sale tells you little about how that page will perform

when prices return to normal.

The audit’s job is to normalize for those factors so we don’t

over-correct for noise.

------------------------------------------

Competitive Landscape & White-Space Angles

------------------------------------------

Another under-used audit step is scanning what the competitive

set is actually saying and showing. Not to copy them, but rather

to identify the gaps they leave open.

I’ll spend time in Meta’s Ad Library pulling the top-spend

competitors’ active ads, analyzing how they position offers,

which creative angles dominate (UGC vs. polished lifestyle vs.

demos), and which incentives they lean on (financing, bundles,

shipping thresholds, proof points, third party credibility, VSLs,

etc.)

Do this well and patterns emerge fast. Maybe every competitor leads

with discounts and nobody leads with value (or durability, or how

it actually works); maybe all of your competitors show the

product in static photos but none bother to show it in action.

Maybe every competitor uses % off or $ off discounts - leaving

you free to shift to a free gift with purchase or a charitable

giveaway that defies easy comparisons (and gives you an advantage

AND more margin).

If your audit ignores the competitive landscape, it’s likely

incomplete. No brand operates in a vacuum, and every brand has

competition (especially the ones that say they have none). When

you understand where your competitors are, you implicitly learn

where they are not – and that’s the area for the taking.

---------------------------

Future-Proofing the Account

---------------------------

Meta evolves faster than most businesses adapt. Privacy updates

cut off data streams. Advantage+ formats reshape campaign

structure. Compliance rules tighten with little warning.

A future-ready audit flags where the account is fragile and

shores it up before the next shift.

That means ensuring Conversion API is live, properly configured

and deduplicating browser + server events, not just installed. It

means passing back offline conversions for high-consideration

businesses so Meta isn’t optimizing for raw leads instead of

revenue.

For ecommerce, it means catalog and product feeds syncing in

near-real-time so you’re never paying to promote out-of-stock

SKUs or wrong prices.

And it means budget agility - the ability to shift spend quickly

for seasonal surges or inventory shortages without blowing up

historical learning.

I also recommend setting up anomaly alerts - whether in Meta’s

automated rules or third-party tools like Optmyzr - so you catch

sudden CPA spikes or broken pixel events before they drain a

week’s worth of budget.
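The underlying idea is simple, whatever tool you use: compare today against a trailing baseline and shout when the deviation is large. A toy illustration (not Optmyzr’s product, and the numbers and 25% threshold are hypothetical):

import statistics

daily_cpa = [41.2, 39.8, 44.1, 40.6, 42.3, 43.0, 38.9]   # trailing 7 days (hypothetical)
today_cpa = 61.4

baseline = statistics.mean(daily_cpa)
spike = (today_cpa - baseline) / baseline
if spike > 0.25:   # illustrative 25% threshold
    print(f"ALERT: CPA up {spike:.0%} vs. 7-day baseline (${baseline:.2f} -> ${today_cpa:.2f})")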

----------------------------

The Metrics That Matter Most

----------------------------

Across all these sections, a few metrics rise above the rest for

audits focused on growth and efficiency:

• MER (Marketing Efficiency Ratio): total revenue ÷ total spend;

tells you if the business is healthier at higher spend.

• NC-ROAS (New-Customer ROAS): especially critical for

growth-oriented DTC brands

• Thumb-Stop Rate & Hook-to-Hold Rate: to diagnose whether video

ads earn attention.

• CTR-Link & CPM-New: reveal if creatives are breaking through

to fresh audiences or spinning on retargeting pools.

• Post-Click CVR & Bounce Rate: to flag lander or offer

friction.

• Cost per Add-to-Cart / Initiate Checkout: strong mid-funnel

signal even before purchases.

• Reach vs. Frequency Decay Curves: to watch for prospecting

saturation and creative fatigue.

I use these as early indicators before obsessing over last-click

ROAS, which can be distorted by attribution quirks.
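For the two blended metrics at the top of that list, the math is deliberately simple (hypothetical monthly totals below; NC-ROAS uses new-customer revenue attributed to Meta over Meta spend):

total_revenue = 412_000            # all revenue, all channels (hypothetical)
total_marketing_spend = 103_000    # all paid spend, all channels
meta_nc_revenue = 268_000          # new-customer revenue attributed to Meta
meta_spend = 74_000

mer = total_revenue / total_marketing_spend   # blended efficiency at current scale
nc_roas = meta_nc_revenue / meta_spend        # growth efficiency of Meta spend
print(f"MER: {mer:.2f}   NC-ROAS: {nc_roas:.2f}")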

This week’s issue is sponsored by Optmyzr.

------------------------------------------

Most performance marketers think of Optmyzr as a PPC platform

that reviews placements, optimizes bids and facilitates some cool

automations. But as Q4 ramps up, one of the most valuable tools

the platform offers is Anomaly Detection & Smart Alerts.

Q4 inevitably breaks things. Pixels drop, feeds misfire,

CPMs spike in one geo while falling in another, Advantage+

suddenly over-indexes to retargeting… all while your team is

buried in promo launches. By the time someone notices, you’ve

already burned through thousands of dollars in wasted spend.

Optmyzr’s Smart Alerts act like a 24/7 analyst who never sleeps.

It learns your campaigns’ normal baseline hour-by-hour and flags

anything that looks off, before the little fire turns into a

raging, money-incinerating inferno. I’ve seen it catch a

product-feed sync error within hours on Black Friday morning,

saving a client an entire day of wasted spend.

You set the thresholds that matter: get a Slack ping if

thumb-stop rate on your best-performing creative drops 20%, or if

CPMs climb 15% in your highest-volume geo. That kind of early

signal is the difference between a quick tweak and a five-figure

problem.

If you still think of Optmyzr only as a PPC optimizer, Q4 is the

perfect time to rethink that. Their Smart Alerts for social ads

give you a real-time safety net, critical during the most

volatile (and expensive) quarter of the year.

-->Try Optmyzr For 14 Days Free (

link

)


Auditing a Meta account isn’t about finding a magic setting

buried three clicks deep in Ads Manager. It’s about surfacing the

quiet misalignments - structural, creative, budgetary, post-click

- that silently drag performance down and keep the algorithm from

doing what it’s built to do.

If the first issue laid the foundation (the business, data, and

market context), this one focused on the engine itself: the

architecture that prioritizes spend, the creative that feeds the

machine, the testing discipline that uncovers the next growth

engine and the resilience required to weather the next

shift/storm.

What I’ve seen again and again is that performance rarely

improves because of a single clever tweak. It improves because

all these gears start meshing: budget flowing to the right

places, creative matching the right buyer with the right promise,

landing experiences carrying that promise through, and testing

that teaches the account how to get better month after month.

In the next issue, I’ll dig even deeper into creative strategy

and testing at scale—the place where most brands either stall or

break through. For now, if you take nothing else from today’s

audit checklist, take this: strong Meta performance is rarely a

mystery—it’s the by-product of well-aligned fundamentals executed

consistently over time.

Here’s to removing friction, unlocking headroom, and making Q4

your best quarter yet.

Cheers,

Sam

BONUS: If you liked this audit, here’s a 75-point checklist you

can use as you apply this framework to your own account (and yes,

I absolutely used Gemini to create this):

----------------------------

1. Business & Goal Alignment

----------------------------

* Confirm ICP and primary target segments.

* Identify most valuable vs. least valuable customers.

* Document core business objectives (growth, CAC, payback

period).

* Map revenue and margin by SKU/service line.

* Note seasonality, promo cycles, inventory or staffing

constraints.

* Capture geographic or channel priorities (markets, stores,

service areas).

---------------------------------

2. Data Infrastructure & Tracking

---------------------------------

* Validate pixel and/or CAPI implementation across all

properties.

* Test for duplicate events (e.g., form submits counted twice).

* Confirm deduplication logic for browser + server events.

* Check that all optimization events fire as intended (View → ATC

→ IC → Purchase/MQL/SQL).

* Verify conversion value accuracy and currency.

* Ensure event volume ≥ 25/week/ad set for stability.

* Reconcile Meta-reported conversions with backend orders/CRM.

* Confirm offline/qualified-lead passback via CAPI or partner

integrations.

* Audit feed freshness, price accuracy, inventory sync for

catalog/Adv+.

* Review privacy banners, GTM tags, or blockers that might

suppress signals.

-----------------------

3. Account Architecture

-----------------------

* Count active campaigns/ad sets and flag over-fragmentation (ad sets with fewer than ~25 optimization events in 30 days).

* Identify legacy or “shanty-town” campaigns still capturing

spend.

* Check naming conventions—offer/product/audience should be

clear.

* Verify exclusions to avoid remarketing cannibalization of

prospecting.

* Segment campaigns around profit centers (hero SKUs, service

tiers, geos).

* Validate optimization goal alignment (e.g., purchases vs.

traffic).

* Compare prospecting vs. retargeting spend (aim ~80/20 for most

e-com).

* Ensure top 3 campaigns control ≥ 60% of spend.

* Confirm that test structures exist—either a dedicated testing

campaign or documented process.

-------------------------------

4. Budget Distribution & Pacing

-------------------------------

* Chart spend vs. NC-ROAS or incremental revenue by campaign.

* Flag any campaign using >10% of spend but contributing a disproportionately small share of new-customer revenue.

* Identify high-efficiency campaigns budget-capped below

potential.

* Review daily vs. lifetime/rolling-7 pacing—test loosening daily

caps.

* Examine marginal return curves for each major campaign/geo.

* Check automated rules and bid strategies for alignment with

business goals.

-------------------------------

5. Attribution & Incrementality

-------------------------------

* Compare 1-day view vs. 7-day click vs. 28-day click results.

* Benchmark MER against in-platform ROAS for reality-check.

* Calculate NC-ROAS to detect double-paying for existing

customers.

* Identify campaigns skewing heavily to retargeting pools.

* Ensure any third-party attribution tool (e.g., Triple Whale,

Northbeam) passes data back into Meta.

-----------------------------------

6. Creative Alignment & Performance

-----------------------------------

* Review active creative concepts (goal: ≥ 6-10 live in

prospecting).

* Flag over-reliance on ≤ 5 ads capturing > 70% of spend in L90.

* Audit hook diversity: testimonial / demo / explainer / UGC /

lifestyle / static / carousel / VSL.

* Measure Thumb-Stop Rate (goal: ≥ 25% for most categories).

* Measure Hook-to-Hold (≥ 35-40% from 3-sec → 15-sec view).

* Track CTR-Link on prospecting (e-com goal: ≥ 0.8–1.0%).

* Track CPM-New to understand cost of fresh reach vs.

retargeting.

* Review ad-to-audience fit: is the opening hook relevant to that

segment?

* Check alignment of creative promise vs. lander above-the-fold.

* Verify ad copy and CTAs match specific offers/bundles shown.

* Check frequency decay - flag concepts fatiguing (climbing frequency with declining CTR).

* Review creative refresh cadence (ideally every 7–14 days).

* Evaluate testing pipeline - documented hypotheses per

angle/theme.

* Confirm creative tracker tags angles/themes so learnings are

actionable.

---------------------

7. Landing-Page & CRO

---------------------

* Test page-load speed (mobile load under ~3 seconds).

* Confirm the headline mirrors the ad hook exactly.

* Ensure immediate clarity on: What is it? Is it for me? Can I

trust it? What next?

* Check CTA visibility above-the-fold on both desktop & mobile.

* Map scroll-depth vs. CTA placement; test anchor-link CTAs.

* Audit form UX: fields, validation errors, mobile friendliness.

* Examine friction types—cognitive, emotional, mechanical.

* Validate trust elements (reviews, guarantees, UGC, 3rd-party

badges) near CTAs.

* Check that promo messaging on lander matches the live ad

flight.

* Track Post-Click CVR (e-com: ≥ 1-2%; lead gen: ≥ 2-3%).

* Monitor bounce rates—identify > 60% as a friction flag.

* Use scrollmaps/heatmaps/session recordings to locate drop-off

zones.

---------------------

8. Testing Discipline

---------------------

* Confirm a written testing calendar or pipeline exists.

* Check budget allocation to testing (e-com 10–20% of total).

* Ensure mix of 10% incremental vs 10x moonshot experiments.

* Limit concurrent test variants to 2-3 for stat power.

* Verify minimum sample sizes (~2k impressions & ~15

conversions/variant) and 7-day run.

* Confirm documentation of learnings & confirmation-test process

for promising winners.

-------------------------------

9. Competitive & Market Context

-------------------------------

* Review top-spend competitor ads via Meta Ad Library.

* Note common hooks/offers competitors emphasize vs. ignore.

* Identify whitespace angles (value, durability, VSLs, gifting,

etc.).

* Check brand positioning relative to competitor promo patterns

(discount vs. bundle vs. value-add).

--------------------------------

10. Future-Proofing & Resilience

--------------------------------

* Confirm CAPI deduplication & event-parameter passback is

healthy.

* Verify product-feed sync frequency and error monitoring.

* Set anomaly-detection alerts (Meta rules or Optmyzr) for

CPA/CTR/CPM spikes & feed breaks.

Loving The Digital Download?

Share this Newsletter with a friend by visiting my public feed.

---------------------------------------------------------------

-->View the Newsletter Feed (

link

)



1700 South Road, Baltimore, MD 21209 | 410-367-2700

Unsubscribe (

link

) | Manage Preferences (

link

)
