***********************
Happy Sunday, Everyone!
***********************
I trust you’re enjoying the first real hints of fall - the cooler
mornings, football weekends and the final few days of Q3 (hard to
believe October begins this week). For most brands, the next 90 days will decide how 2025 ends. And for those brands, Meta remains the single most powerful lever available for finishing what has been a volatile, unpredictable and all-around rough year strong - but only if the account is set up and managed the right way.
In the past few weeks, I’ve had a steady stream of brands reach
out for guidance, consulting, and audits, almost all focused on
their Meta Ads accounts. That spike in requests isn’t random; it
happens every year. And it happens because brands know that when
Meta underperforms, it’s rarely the algorithm’s fault; it’s
almost always because something fundamental is missing,
misaligned or broken.
There’s no two ways about it: Meta is the greatest
demand-creation and demand-capture platform in the history of
modern commerce. A strong product paired with a well-run Meta
strategy can unlock extraordinary, profitable growth. And there’s
no time when that matters more than right now - in the weeks
leading into BFCM. For most B2C brands, this is the make-or-break
window. Nailing Meta in the next 90 days is the simplest way to
recover from the headaches that plagued you earlier in the year -
the tariffs, soft demand, stockouts, whatever.
If you’re in that boat, my recommendation is simple: audit your
account. Either have someone you trust do it or roll up your
sleeves and do it yourself. Start by understanding where things
have gone off-track, then move to incremental optimizations and
finally look for the blue-ocean opportunities hiding behind the
noise.
To make that easier (and to help you get started) I’m dedicating
the next two issues to my six-part Meta Ads Audit framework.
This issue covers the foundation: business context, fundamentals,
and data infrastructure — the places where most unseen problems
live. Next week, I’ll build on that with architecture, creative
and testing — the levers that turn the insights you uncovered
this week into performance gains.
Let’s dive into the first 3 pillars:
-------------------------------
Pillar #1: Get Clear On The Why
-------------------------------
Every successful audit starts by knowing why it’s needed. Without
that, even exceptional findings miss the mark. Over the last few
years, I’ve reviewed hundreds of ad accounts, from small spenders
to million-dollar-per-month-plus behemoths. Regardless of account
size or industry, I can categorize the “why” behind every audit
into one of five buckets:
* Poor Performance: the most common (and the most obvious) - the
account just wasn’t performing to the level expected.
* Stalled Growth: the second most common issue - the account was
performing exceptionally well at a certain level of spend, but
efforts to go beyond the current performance volume consistently
failed.
* Right People / Right Seats: this is the classic “I’m not sure
if the person who got us from 0 to 1 can take us from 1 to 100”
audit. There’s nothing actively wrong (in fact, things are
usually going well!)...but the brand wants to know if there’s
more that could be done. This is largely a “right people in right
seats” question - are the people running the account the right
ones to unlock new levels of scale? Is the structure in place
conducive to that scale? What else needs to happen/change for
that scale to be possible + profitable?
* Data Discrepancies: typically, this breaks down into one of two
flavors - either the platform data looks sublime but the bank
account looks like a horror show, or the ad account looks
horrific but the brand is printing cash. Neither is good, though
the latter is certainly preferable to the former. Either way,
something doesn’t add up.
* Second Opinions: These sound innocuous, but they’re (almost)
always caused by something - a new CMO is brought on board, a CEO
has a bad feeling, etc. No matter what tips that first domino,
the last domino is always the same: someone wants a second
opinion.
Before opening the ad account, it is essential that you
understand the root cause behind the audit. This enables you to
ensure that the output aligns to the rationale. There’s nothing
more frustrating for a brand owner than commissioning an audit
due to poor performance, only for the person to come back and
tell you everything’s fine. Clearly, it’s not. Something is wrong
(else, they would not have hired you!). That something may not be
in the ad account, but it exists.
It’s very likely you’ll find other problems as you go through the
audit. Some of those problems might be more severe than the one
you were brought in to solve. But finding all of that doesn’t
matter if you can’t connect what you found to why you’re here in
the first place.
As I’ve done more of these audits, I’ve come to the conclusion
that Jeff Bezos was absolutely correct when he said, “When the
data and the anecdotes disagree, the anecdotes are usually
right.” That may be infuriating to hear as a data-driven
marketer, but it’s no less true. The most successful and
effective audits are the ones that identify why the data +
anecdotes appear to be in conflict, then provide a path that
unifies them while moving the business forward.
That last bit is vitally important - an audit is only as good as
the go-forward actions it enables. No one cares if you can tell
them what’s wrong IF you can’t tell them how to fix it. Lead with
solutions. Prioritize next steps. Make it easy (or, at least,
intuitive) for them to fix the problem that led to where you are
today.
This week’s issue is sponsored by Optmyzr.
------------------------------------------
Speaking of making the mechanics easier… Optmyzr has long been
known for PPC automation, but its social platform does the same
heavy lifting for Meta and LinkedIn.
Most marketers hear Optmyzr and think “PPC tool for Google
search.” Fair. That’s where it built its reputation. But what
almost nobody realizes is that the same team quietly built a
phenomenal social ads platform—one that deserves the same
attention their PPC suite gets.
Optmyzr Social brings the same discipline, automation, and
control to Meta and LinkedIn that performance teams have long
relied on in search. It strips out the repetitive grunt
work—launching campaigns, adjusting bids, scaling budgets—so
running social stops feeling like a second job.
From there it gets better: a single dashboard pulls every account
and campaign into one clean view, so you don’t have to live
inside multiple Ads Managers. Real-time alerts flag shifts in CPA
or CTR before they become expensive. Rule-based automations let
you pause under-performers, feed more budget to winners, and keep
performance guardrails in place 24/7.
If you’ve only thought of Optmyzr as a PPC tool, it’s worth a
second look. The social platform makes managing Meta and LinkedIn
ads as straightforward—and as automated—as search has become.
Try Optmyzr Social free for 14 days or book a demo to see what
you’ve been missing.
-->Try Optmyzr For 14 Days Free
------------------------------
Pillar #2: The Business Itself
------------------------------
You cannot produce a high-quality audit of a Meta Ads Account
(or a Google Ads Account, or any other ads account) if you do not
understand the underlying business. Every ad account (even the
great ones) is an imperfect reflection of the business it
promotes, with the level of dissonance between the account + business inversely proportional to the overall health of the ad account.
That’s a fancy way of saying: the tighter the connection between
the ad account and the underlying business, the more likely it is
that the ad account is performing well.
Before diving into the ad account, you should be asking four main
types of questions:
Type 1: Business Goals + Constraints
------------------------------------
* What is your ICP / primary target audience?
* Who are your most + least valuable customers/clients? Why?
* What is your primary goal for the account?
* What’s your target CAC by Service/SKU?
* What’s your LTV by Service/SKU?
* What is your target payback period?
* Are there other constraints or considerations (inventory,
location, etc) that can inhibit scaling or should be factored
into a review?
* What are your current budgets? Are these fixed or flexible? If
so, how flexible?
Type 2: Historical Performance + Trends
---------------------------------------
* What is the history of your account (basically - how did we get
here?)
* What are your primary concerns about the account?
* Does seasonality influence your business? If so, how? Are
certain SKUs/Service Lines more prone to seasonality than others?
* What are your promo + product cycles (if any)?
* Are there any other externalities that have impacted the
performance of your account (interest rate changes, tariffs, port
shut downs, recalls, etc.)?
Type 3: Infrastructure + Data
-----------------------------
* What is your current tech + data stack?
* Do you use a landing page builder? TPA tool? Incrementality
tool?
* How often is your tech stack or data stack changing?
* What other tools do you use to support your marketing?
* What data do you collect? Where is it stored?
* How often is your CRM/CDP updated/cleaned?
* What other sources feed data into that CRM/CDP (do you buy
lists? Do you receive lists from partners - such as conference
organizers, co-marketers, etc)?
* Is there a current ads + landing page repository?
Type 4: Customer + Offers
-------------------------
* Who are your direct + indirect competitors?
* What are your ICP’s current alternatives to your solution (and one of these might be “do nothing”)?
* How is your company/brand different from your competitors?
* What are your UVPs/USPs?
* What is your brand DNA/core values / ethos?
* How do most of your customers hear about you?
* How does your sales/intake process work (if applicable)? Can we
test it?
To be very candid, this should feel like the business equivalent
of a proctology examination, because that’s exactly what it is.
Just as a doctor will commission an ungodly number of tests, or a
great lawyer will ask you highly personal, borderline offensive
questions, or an accountant will ask to see every receipt, bank
account statement, credit card statement, invoice and document….a
great marketer will want to know as much as possible about your
business before they begin.
Once you have that information, the next step is getting crystal
clear on the industry/space. The real leverage comes from knowing
the market you’re auditing. Before I ever open Meta ads manager,
I spend hours conducting company + competitor research. The goal
of this is to create a picture of the demand, constraints and
economics that shape what “good” looks like for the brand in
question. Here’s what I gather up-front:
Keyword & Audience Intelligence
-------------------------------
Even if we’re not running search, keyword research tells us how
customers describe their own needs - and which phrases signal
buying intent versus curiosity. The more you understand about the
target audience, the better you’ll be able to evaluate creative,
offers, messaging + landers. Pair this with audience-level
research (affinities, purchase triggers, proxies for income or
lifestyle) so that when we review Meta targeting, we know whether
those segments make sense.
Customer / Client Feedback
--------------------------
I’m continually shocked at how often customer feedback, reviews,
ratings and other third-party validation/credibility is
overlooked in ad account audits. This often manifests itself in
the form of actual customer testimonials being ignored in ad and
landing page copy. If your actual, real-life customers are
telling you, “We went with your company for X and Y reasons,” or,
“Your brand is the best in the world at Z,” – that’s absolutely
incredible information.
From an audit perspective, understanding how the audience
perceives the underlying brand/product/service is an invaluable
data point when assessing creative and landers.
The same is true for credibility/trust factors: if you’re a
challenger or upstart brand, every customer/prospect
subconsciously risk-adjusts your offering, because buying your
widget or going with your company poses a risk relative to using
the known and trusted brand. Reviews, ratings and third-party validation (awards, press, etc.) can reduce the perceived risk.
If a finding from your audit is that conversion rates are lower
than you’d expect, and you find that trust factors aren’t
prominently featured in ads and/or landers, you can make an
informed observation.
Competitor Landscape
--------------------
No brand advertises in a vacuum. There are ALWAYS competitors - the
question is whether or not you recognize them and do your
homework on them. Before I open the ad account, I want to know
which competitors are advertising, how they’re positioning, the
angles they lean on in creative and where they’re active. This
isn’t about trying to mimic their strategy or copy their
creative; it’s about spotting gaps and differentiators that my
client can exploit and identifying areas where we are simply not
competitive (every client will ALWAYS tell you they’re the best
at everything; that’s (almost) never the case).
Seasonality, Promotions & Product Drops
---------------------------------------
An audit that compares a non-promo month to a big sale month (or
Q1 to Q4) will mislead you every time. Before you jump into the
ad account, collect the brand’s promo calendar, product-drop
schedule and any historic sales patterns. This allows you to
normalize performance swings before assessing the account.
Unit Economics & Margin
-----------------------
It is absolutely imperative that you know the actual thresholds
for profitable growth - product/service -level margins, shipping
and returns impact, contribution margin, rework rates, etc.
Fundamentally, this is what defines acceptable CAC or ROAS.
Having these numbers up front allows for an unbiased, clear-eyed
evaluation of the account performance.
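If it helps to make those thresholds concrete, here's a minimal sketch of the math. The numbers are hypothetical placeholders - swap in the brand's actual figures from finance:

```python
# Hypothetical unit economics - replace with the brand's real numbers.
aov = 100.00                    # average order value
cogs = 38.00                    # cost of goods per order
shipping_and_returns = 7.00     # blended shipping + returns cost per order

contribution_per_order = aov - cogs - shipping_and_returns   # $55.00

# Break-even CAC: spend more than this to acquire a one-order customer
# and the first order loses money.
breakeven_cac = contribution_per_order

# Break-even ROAS: revenue required per $1 of ad spend just to cover costs.
breakeven_roas = aov / contribution_per_order                # ~1.82

print(f"Break-even CAC:  ${breakeven_cac:,.2f}")
print(f"Break-even ROAS: {breakeven_roas:.2f}")
```

Repeat purchases and subscription LTV obviously raise the ceiling on acceptable CAC; this is just the first-order threshold, but it gives you an unbiased yardstick before you ever look at the account.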
Sales-Cycle & Lead-Quality Data
-------------------------------
For higher-consideration products and services, it’s critical to
know the initial qualification rate (lead to MQL/SQL), the
expected time between each stage (lead to MQL, MQL to SQL, SQL to
resolution) and what percentage of qualified inbound ends up
closing. If there’s a 30-day lag between lead and closed/won,
that must be factored into the evaluation of a campaign AND the
measurement setup used in the account.
Geo-Market Economics
--------------------
One of the more counterintuitive things in advertising is dealing
with situations where the smart move is to spend “inefficiently”
on advertising due to geo or market economics. As an example: an
ad account I recently reviewed showed sky-high customer
acquisition costs in certain geos. The initial reaction (and what
most marketers would have said) was to cut spending in those
areas and re-allocate to others. What they would have missed is
that this business had exceptionally high fixed costs per
location AND an inability to re-allocate personnel to other
areas.
This business had about $120,000 per month in per-location fixed
costs (rent, salaries, trucks, insurance, regulatory permits,
etc.). The CAC in one particular market was $3,000/customer (vs.
$1,250 elsewhere). Each customer’s net revenue (gross revenue -
COGS) was ~$10,000. As counter-intuitive as this sounds, the
optimal solution for this brand was to spend about $30,000 in
this market.
Why?
Because this acquires 9-12 customers per month, which offsets
about 60% of the location’s fixed costs and keeps the team there
busy but not overwhelmed.
And, you’re probably wondering, why not just spend ~$54k to have
the location break even?
Answer: because re-allocating the final $24k ($54k to break even
- $30k spent) to other markets with much lower CACs AND capacity
is optimal from an enterprise perspective. In those other markets, the $24k will drive ~19 customers (about the number those other locations can serve) and contribute ~$166,250 in net revenue to the enterprise. The end result is a net gain of $116,250 ($166,250 in gain from the other markets, less the $50,000 loss from the expensive market).
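If you want to sanity-check that arithmetic yourself, here's a rough sketch using the same rounded numbers from the example:

```python
# Rounded numbers from the example above.
fixed_costs = 120_000            # per-location monthly fixed costs
net_rev_per_customer = 10_000    # gross revenue - COGS, per customer

cac_expensive = 3_000            # CAC in the expensive geo
cac_cheap = 1_250                # CAC in the other markets

# Spend $30k in the expensive geo...
spend_expensive = 30_000
customers_expensive = spend_expensive // cac_expensive        # 10
location_pnl = customers_expensive * net_rev_per_customer - spend_expensive - fixed_costs   # -50,000

# ...and re-allocate the remaining ~$24k to cheaper markets with spare capacity.
spend_cheap = 24_000
customers_cheap = spend_cheap // cac_cheap                    # 19
contribution_cheap = customers_cheap * (net_rev_per_customer - cac_cheap)   # 166,250

print(f"Expensive-market P&L:   {location_pnl:+,}")
print(f"Contribution elsewhere: {contribution_cheap:+,}")
print(f"Net enterprise impact:  {location_pnl + contribution_cheap:+,}")    # +116,250
```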
Great media allocation (exactly what every audit should do) isn't
just about channel metrics (like CAC or ROAS) – it is about how
ad dollars interact with the underlying business (fixed +
variable cost structures, capacity limits, marginal returns,
organizational priorities) across markets + segments.
Historic Pricing, Offer & Promotion Strategy
--------------------------------------------
I want to understand how the brand has used bundles, discounting,
financing or shipping thresholds in the past. What promos were
run? How do those compare to what competitors have run (or are running) now? Changes in offer structure often explain sudden CVR
swings that get wrongly blamed on creative or algorithm changes.
Product/Service & Inventory Roadmap
-----------------------------------
Upcoming launches, seasonal SKUs, or known stock-outs influence
demand curves and CTR/CVR patterns. This context helps us
distinguish between true performance changes and shifts caused by
merchandising. The same holds true in service-based accounts – a
just-introduced service will likely have lower demand than a core
service, which can skew an evaluation of the account.
Channel-Mix & Halo Effects
--------------------------
This is difficult to do, but it is essential to understand how
Meta interacts with other traffic sources like paid search,
email, SMS, influencer, retail/wholesale. A stable MER with
rising Meta spend can signal a healthy halo effect on branded
search and direct traffic even if Ads Manager under-credits it.
Customer Sentiment & Category Perception
----------------------------------------
One of the major drivers of ad account performance is customer
sentiment + category perception. Every brand will tell you things
are great - but that often fails to fully map onto reality.
My solution: use customer sentiment and category perception as a
market-intelligence scan rather than just a source of creative
hooks. Before I ever open Meta ads manager, I want to know how
the audience perceives the entire category, how they talk about
each competitor, what they believe is table stakes versus what
feels truly differentiating, and which frustrations or myths
dominate the conversation.
That context is essential when evaluating whether the account’s
current messaging is swimming with the current or against it,
whether the offer addresses the real objections buyers have and
how much of the performance gap is likely due to creative
misalignment versus structural or budget issues.
By gathering this context first - through reviews, Reddit, forums, earned media and competitor chatter - I can start the
audit knowing the real purchase drivers, perceived barriers and
trust factors in the market. This shapes how I interpret metrics
later: a low CTR may be a creative/market mismatch, not an
algorithm issue; persistently low conversion rates might reflect
a credibility gap the ad account data simply won’t show. The
sentiment review grounds the audit in the real market forces
shaping outcomes, not just what the dashboards record.
Regulatory & Compliance Factors (as relevant)
---------------------------------------------
In categories like health, finance, alcohol, or legal, note any
ad-policy or disclosure constraints up-front so recommendations
stay realistic. There are few things more annoying to a brand
than getting audit recommendations they are legally prohibited
from implementing.
There’s no two ways about it - this is a TON of work up-front. It
is not easy. But the benefit of having both the business
understanding AND the competitive/market analysis is the frame it
provides.
---------------------------------------
Pillar #3: Data Collection + Management
---------------------------------------
For all the talk about creative angles, bid strategies, and
funnel hacks, the simple reality of Meta advertising today is
that data is the single most impactful performance lever you can
control.
Meta’s machine-learning algorithm thrives on rich, accurate and
timely conversion signals. If those signals are missing, late or
wrong, the platform is optimizing blind. No amount of clever
audience stacks, scroll-stopping UGC or brilliant landers will
overcome that at any level of scale.
A remarkable Meta audit must begin with the data infrastructure
that powers the entire system. As the old saying goes, “Garbage
in, garbage out.”
Meta’s algorithm is designed around optimization event feedback
loops: the system learns which impressions led to meaningful
outcomes and analyzes billions of user + system-level data points
to determine what factor(s) are most likely to contribute (or
detract) from positive events going forward. It then uses all
that data to inform its probabilistic model of the expected value
of each subsequent impression. That determination, in turn,
controls bids + delivery in near-real-time.
As wildly impressive as all that is, it is absolutely worthless
if:
* Conversions aren’t tracked at all (or are tracked twice)
* The event fired doesn’t match the business outcome
* Value parameters are missing or incorrect
* Post-click sales, subscriptions or lead-qualification outcomes
never get passed back
When any of those things happen, the algorithm gets the wrong
reinforcement signal and starts rewarding the wrong users or
behaviors. The end result of that is higher CPAs, volatile ROAS
and scaling stalls - not because the creative is bad, the
audience is wrong or the market changed, but because we decided
to run a Ferrari (Meta) on used cooking oil instead of 93 octane.
Step 1: Review Conversion Tracking
----------------------------------
Your optimization events are the single-most-impactful data you
pass to Meta. The first thing you should validate is that every
conversion signal sent to Meta is intentional, unique and
accurate.
Key checks include:
* Duplicates or irrelevant events: the most common issue I see. Example: “newsletter sign-ups” or “career submissions” or “partnership inquiries” all firing the same “Lead” event that Meta optimizes for. This misguides the algorithm to chase cheap, low-value actions instead of the leads you actually want.
* Event-to-Journey Alignment: each stage of the buying journey
(view content, add to cart, initiate checkout, purchase) should
have its own discrete event. The same is true for leads - fire an
event for form start, form submit, MQL, SQL, Closed/Won,
Closed/Lost. Having this data in Meta ads manager allows you to
understand the full-funnel, full-lifecycle impact of your
marketing. It also keeps account management anchored to data Meta can actually see. I’ve reviewed far too many
accounts where optimizations + budgetary decisions are made based
on data not in Meta’s view (i.e. pausing an ad that appears to be
driving fantastic results, because the leads driven from that ad
are all DQ’d in the CRM). When that happens, Meta has NO IDEA why
you’ve made that decision - so it can’t improve.
* Validate Conversion Values: if purchase or deal values are
passed, test that they match order totals (including or excluding
tax/shipping as intended). Meta’s value-based bidding needs clean
numbers.
* Technical blockers: outdated pixel code (should be resolved if
you use GTM or the native Shopify integration), cookie-consent
banners or GTM errors often prevent events from firing.
Test-event tools and order-to-event reconciliation uncover these
silent failures.
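One practical way to run that order-to-event reconciliation: export a day (or week) of orders from the store and the matching Purchase events from Meta, then diff them. A minimal sketch, assuming two CSV exports with hypothetical column names:

```python
import csv

# Hypothetical exports - column names will vary by store and export tool.
def load(path, id_col, value_col):
    with open(path, newline="") as f:
        return {row[id_col]: float(row[value_col]) for row in csv.DictReader(f)}

orders = load("shop_orders.csv", "order_id", "order_total")            # source of truth
events = load("meta_purchase_events.csv", "order_id", "event_value")   # what Meta logged

missing = sorted(set(orders) - set(events))     # orders Meta never saw
phantom = sorted(set(events) - set(orders))     # events with no real order (dupes, test fires)
value_mismatch = [
    oid for oid in set(orders) & set(events)
    if abs(orders[oid] - events[oid]) > 0.01    # tax/shipping handling often hides here
]

print(f"Orders missing from Meta: {len(missing)}")
print(f"Events with no real order: {len(phantom)}")
print(f"Value mismatches: {len(value_mismatch)}")
```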
In my experience, roughly 1-in-3 accounts have at least one major
conversion tracking or passback mistake. Even if you think
everything is set up correctly, it’s often worth double checking.
Step 2: Verify Conversions API (CAPI)
-------------------------------------
CAPI is Meta’s server-side bridge that supplements or replaces
browser-pixel events. It boosts match rates for users on iOS or
using ad-blocking browsers, stabilizes reporting and makes
bidding models more resilient to fuzzy data situations (i.e.
cross-device conversions, ad blockers, dropped FBCLID parameters,
etc.).
Almost every business will benefit from a proper CAPI
implementation. Here’s what to look for:
* Confirm that CAPI is implemented - many smaller accounts still
haven’t done it. Meta has a native integration with Shopify, and
for lead gen accounts, Zapier is absolutely wonderful.
* Validate deduplication logic so browser and server events for the same optimization event aren’t double-counted. Meta’s system will do this by default IF you pass both the event name AND event ID parameters from both the browser + server (CAPI) events (see the sketch after this list).
* Make sure all critical parameters (event value, currency,
content IDs, product SKU, service, location, etc.) are being
passed.
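To make the dedup mechanics concrete, here's a minimal sketch of a server-side Purchase event sent to the Conversions API. The pixel ID, token and email below are placeholders; the important detail is that event_name and event_id exactly match the corresponding browser-pixel event so Meta can deduplicate the pair:

```python
import hashlib
import time

import requests

PIXEL_ID = "YOUR_PIXEL_ID"          # placeholder
ACCESS_TOKEN = "YOUR_CAPI_TOKEN"    # placeholder

def hash_email(email: str) -> str:
    # Meta expects identifiers normalized (trimmed, lowercased) and SHA-256 hashed.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

payload = {
    "data": [{
        "event_name": "Purchase",      # must match the browser event name...
        "event_id": "order_10482",     # ...and the browser event_id, so the pair is deduped
        "event_time": int(time.time()),
        "action_source": "website",
        "user_data": {"em": [hash_email("customer@example.com")]},
        "custom_data": {"currency": "USD", "value": 129.00},
    }]
}

resp = requests.post(
    f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events",
    params={"access_token": ACCESS_TOKEN},
    json=payload,
)
print(resp.status_code, resp.json())
```

The browser side has to carry the same event_id when the pixel fires; the native Shopify integration handles that automatically, while custom GTM setups need to pass it explicitly. Meta's Test Events tool will show you whether the pairs are actually being deduplicated.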
Accounts with a proper CAPI implementation often see 5-15% more
conversions in the ad account – these aren’t net-new (it’s not
like you weren’t getting them before - you were! You just didn’t
know where they came from), BUT they do provide Meta with
significantly more data for optimization.
Step 3: Pass Back Post-Conversion Outcomes
------------------------------------------
In lead-gen, subscription and other high-consideration
categories, counting every form submission as a conversion is
misleading. Most accounts should optimize for qualified leads or
closed customers, not for every raw lead that comes through the
site.
Here’s what I look for in each post-conversion review:
* Map CRM/CDP data to identify which leads became
MQLs/SQLs/paying customers.
* Confirm with your sales/customer success team that data in the
CRM is updated as quickly as is humanly reasonable – I’ve worked
with far too many companies where sales didn’t bother to update
the CRM until the end of the month (commission time)...which
royally screwed over the marketing team because by the time that
update happened, the attribution window had closed.
* Ensure that those outcomes are pushed back to Meta as custom
conversions (with values where possible).
* Confirm time stamps and primary keys (email, phone, click ID
(FBCLID) and/or event ID) so Meta can connect the downstream
event to the original click.
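Here's a rough sketch of what that passback can look like, assuming a CRM export with hypothetical column names and the same Conversions API endpoint from Step 2. "ClosedWon" is just an example custom event name - use whatever you've defined in Events Manager:

```python
import csv
import hashlib

import requests

PIXEL_ID = "YOUR_PIXEL_ID"          # placeholder
ACCESS_TOKEN = "YOUR_CAPI_TOKEN"    # placeholder

def sha256(value: str) -> str:
    # Identifiers should be normalized before hashing (phones: digits + country code).
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

events = []
with open("crm_closed_won.csv", newline="") as f:    # hypothetical CRM export
    for row in csv.DictReader(f):
        events.append({
            "event_name": "ClosedWon",                  # example custom event
            "event_time": int(row["closed_at_unix"]),   # when the deal actually closed
            "action_source": "system_generated",
            "user_data": {
                "em": [sha256(row["email"])],
                "ph": [sha256(row["phone"])],
                # passing the saved click ID ("fbc") here too, when you have it,
                # materially improves match rates
            },
            "custom_data": {"currency": "USD", "value": float(row["deal_value"])},
        })

resp = requests.post(
    f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events",
    params={"access_token": ACCESS_TOKEN},
    json={"data": events},     # the endpoint accepts batched events
)
print(resp.status_code, resp.json())
```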
This step usually drives the largest ROI increase for
service-based businesses because it aligns spend with high-LTV
buyers rather than raw lead volume.
Step 4: Review Feeds and Linked Data Sources
--------------------------------------------
For ecommerce brands running catalog or Advantage+ Shopping
campaigns:
* Check the freshness of product feeds. There’s no excuse to not
have real-time (or, at worst, daily) syncs. One of the biggest
culprits behind wasted spend in accounts is delayed product
updates, which results in spend continuing to flow to
out-of-stock or inaccurate SKUs. The same is true for
service-based businesses (esp. ones with capacity constraints) –
if you have no appointments available for a month in a particular
geo, why are you spending money advertising there?
* Review field mapping for errors such as SKUs with out-of-stock
popular variants (nothing quite like advertising a SKU where you
only have XS and XXL in-stock) or missing GTIN/price data that
limits delivery.
* Verify that inventory and price changes flow through
automatically and quickly.
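A quick spot-check script can surface both problems before they burn budget. The file and field names below are assumptions - adjust them to the brand's actual catalog feed format:

```python
import csv
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(hours=24)     # "daily at worst," per the point above
now = datetime.now(timezone.utc)

stale, out_of_stock_active = [], []
with open("product_feed.csv", newline="") as f:    # hypothetical feed export
    for row in csv.DictReader(f):
        updated = datetime.fromisoformat(row["last_updated"])   # assumed ISO-8601 timestamp
        if updated.tzinfo is None:
            updated = updated.replace(tzinfo=timezone.utc)
        if now - updated > STALE_AFTER:
            stale.append(row["sku"])
        if row["availability"] == "out of stock" and row["status"] == "active":
            out_of_stock_active.append(row["sku"])

print(f"SKUs not updated in 24h: {len(stale)}")
print(f"Out-of-stock SKUs still active: {len(out_of_stock_active)}")
```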
The bottom line: a delayed feed or broken catalog sync will
obliterate account performance…even when campaign + ad set
settings look fine. It will make otherwise exceptional creatives
look terrible. It will trick Meta into serving the wrong ads to
the wrong people.
Step 5: Data Management + Governance
------------------------------------
Data infrastructure is often referred to as plumbing - and for
good reason: that’s basically what it is. Unsexy, boring,
plumbing. But getting that plumbing right is only half the battle
- a flawlessly designed data infrastructure is worthless if the
data moving through it is corrupted or wrong.
In my experience, this is the most nefarious issue that impacts
Meta ads accounts, and 9/10 times, it is missed. Just this year,
I’ve seen:
* Sales reps retroactively re-qualify leads in order to hit
quotas (pro tip: don’t bonus your salespeople based on how many
leads they turn into SQLs, and DEFINITELY don’t make it a
competition so they all do it at the end of the month to try to
win).
* Old or duplicate records remaining in the system (bonus points
if you send the same product twice for one order b/c you didn’t
deduplicate)
* Different business units using different CRMs…but not linking
them so cross-sells were never counted (yay!) and LTV numbers
were WILDLY off for certain user segments.
Identifying these issues is NOT easy. Here’s where I start:
* Review CRM audit logs for frequent retro-edits (that’s how I
found the salespeople thing)
* Compare Meta-reported purchases to fulfilled orders to detect
gaps
* Speak directly with business unit leaders to learn how leads
are logged and updated day to day
* If different CRMs are used by different segments/business units
(yes, it happens), actually compare the two!
* Actually do the stuff - purchase things. Submit lead forms.
Schedule meetings. Subscribe. With real money. Then track how
your data flows through the system.
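For the audit-log piece specifically, the tell is usually a burst of lifecycle-stage edits clustered at month-end. A rough sketch, assuming a change-log export with hypothetical column names:

```python
import csv
from collections import Counter
from datetime import datetime

edits_by_rep_day = Counter()
with open("crm_change_log.csv", newline="") as f:     # hypothetical audit-log export
    for row in csv.DictReader(f):
        if row["field_changed"] != "lifecycle_stage":
            continue
        changed_at = datetime.fromisoformat(row["changed_at"])
        edits_by_rep_day[(row["changed_by"], changed_at.date())] += 1

# Flag reps making an unusual number of stage edits in a single day,
# especially in the last few days of the month.
for (rep, day), count in edits_by_rep_day.most_common(10):
    flag = "  <-- month-end?" if day.day >= 28 else ""
    print(f"{rep}  {day}  {count} lifecycle-stage edits{flag}")
```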
The particularly persnickety thing about data management /
governance issues is that a technical fix alone is usually
insufficient – you also need to have the client (or people on the
client’s team) change their behavior. Depending on what needs to change, that can be both difficult to do AND make you persona
non grata with some of your client’s employees (like the
salespeople who suddenly don’t get their bonuses just for
changing Lifecycle Stage in HubSpot to SQL). It’s not fun, but
it’s essential.
Recognizing Data-Driven Performance Problems
--------------------------------------------
In almost all of the “poor performance” or “unable to scale” audits, clients complain of some (or all) of these issues – and are convinced they are a product of something in the ad account:
* Sudden spikes or drops in CPA/ROAS with no creative or budget
changes
* Spend skewing toward low-value campaigns after new events were
added
* Campaigns stuck in “Learning Limited” despite adequate budgets
* Inconsistent revenue numbers between Meta and Shopify/GA4/CRM
* Strong CTR, CVR + AOV but inability to scale
The reality? Almost all of them have data gaps or mismanaged data
infrastructure as their root cause, not campaign settings, ad set
targeting or creative. If you don’t check data first, you’ll end
up chasing ghosts around the ad account for hours (or days, or
weeks) – and be no closer to an actual resolution.
Here’s the exact checklist I use when reviewing Meta Ads / CAPI:
* Gather documentation for pixels, APIs, feeds and CRM
integrations
* Review the current Google Tag Manager (or other Tag Manager)
* Test event firing using Meta’s Test Events, GTM’s Preview and
server logs
* Reconcile a sample of orders or leads against Meta’s logged
events
* Validate values and parameters (currency, SKUs, order totals,
event ID)
* Inspect CRM change logs
* Interview ops/sales teams
* Deploy fixes and set up automated QA checks to prevent
regression
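That last item doesn't have to be fancy. A scheduled script that compares yesterday's Meta-reported purchases against the store's fulfilled orders and alerts when the gap grows will catch most regressions. A minimal sketch with placeholder numbers:

```python
# Run daily (cron, Airflow, etc.) against yesterday's numbers.
meta_purchases = 212           # from Meta reporting (API or export) - placeholder
meta_revenue = 18_450.00
store_orders = 231             # from the store / order system - placeholder
store_revenue = 19_900.00

DRIFT_THRESHOLD = 0.15         # alert if Meta is >15% off from the source of truth

count_drift = abs(meta_purchases - store_orders) / store_orders
revenue_drift = abs(meta_revenue - store_revenue) / store_revenue

if count_drift > DRIFT_THRESHOLD or revenue_drift > DRIFT_THRESHOLD:
    # Swap print() for Slack/email alerting in a real setup.
    print(f"Tracking drift alert: counts off {count_drift:.0%}, revenue off {revenue_drift:.0%}")
```

Some drift is normal (attribution windows, modeled conversions), so set the threshold off the account's typical gap rather than expecting a perfect match.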
For most audits, my deep-dive into the brand, the
audience/competitor research + data infrastructure review
comprises >50% of the total time. That’s intentional. Those three
things account for a majority of the results observed in the ad
account AND they’re often the least-reviewed. The net-net: these
seemingly minor, annoying, boring things have the highest
probability of uncovering things that were missed by everyone
else - which are (likely) leading to the results the client is
seeing/feeling.
Again, when the data and the anecdotes disagree, the anecdotes
are usually right – because they’re picking up on something that
your data is not. All we’re doing here is removing the blockers
and allowing the data + anecdotes to tell a similar story.
Next week, we’ll get into the ad account – focusing on the
account structure, audiences, creative + testing strategy.
Until then, have a great week!
Cheers,
Sam
Loving The Digital Download?
Share this Newsletter with a friend by visiting my public feed.
---------------------------------------------------------------
-->View the Newsletter Feed
Follow Me on my Socials
1700 South Road, Baltimore, MD 21209 | 410-367-2700
Unsubscribe | Manage Preferences