Every week, somewhere in DTC, a brand is celebrating a great ROAS number. Their Meta-reported ROAS is 4.2x. The team is happy. The founder is reassured. And the brand might be quietly losing money.
ROAS — return on ad spend — is the metric that gets reported in every agency pitch deck, every weekly update, every performance review. It's the number everyone knows how to calculate and the number almost nobody actually trusts when they're being honest. We don't trust it. Here's exactly why, and here's what we use instead.
The Three Reasons ROAS Fails as a Decision Metric
Reason 1: It's Platform-Reported, Which Means It's Self-Reported
When Meta tells you your ROAS is 4.2x, Meta is calculating that number based on the conversions Meta takes credit for. Meta has a financial incentive to take credit for as many conversions as possible — because a higher ROAS makes the platform look effective, which justifies your continued and increased spend.
The default Meta attribution window is 7-day click, 1-day view. That means Meta claims credit for every purchase made within 7 days of an ad click, and every purchase made within 1 day of an ad view — even if the customer also clicked a Google search ad, opened an email, and navigated directly to the site in between. Every other channel that participated in that journey is claiming the same conversion. Add up all the platform-reported ROAS numbers and you'll get a total attributed revenue that frequently exceeds your actual Shopify revenue by 50-150%.
This isn't fraud. It's the structural reality of last-touch, single-platform attribution. But it means the number you're using to make budget decisions is systematically inflated, in a direction that benefits the platform reporting it.
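The over-counting is easy to sanity-check yourself. A minimal sketch, with hypothetical channel figures and a hypothetical Shopify total (these numbers are illustrative, not benchmarks):

```python
# Hypothetical month: each platform's self-reported attributed revenue.
platform_attributed = {
    "meta": 420_000,
    "google": 310_000,
    "tiktok": 95_000,
    "klaviyo_email": 180_000,
}

shopify_actual_revenue = 610_000  # what the store actually booked

total_attributed = sum(platform_attributed.values())

# Over-attribution: how far the combined platform claims exceed reality.
over_attribution_pct = (total_attributed / shopify_actual_revenue - 1) * 100

print(f"Platforms claim ${total_attributed:,}; actual was ${shopify_actual_revenue:,}")
print(f"Over-attribution: {over_attribution_pct:.0f}%")
```

With these illustrative numbers the platforms collectively claim about 65% more revenue than the store booked, squarely inside the 50-150% range described above.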
Reason 2: ROAS Ignores Your Cost Structure
A 4x ROAS sounds great. But whether 4x ROAS is profitable depends entirely on your cost of goods sold, fulfillment costs, platform fees, agency fees, and operating overhead. A brand with 30% blended margins needs roughly 3.3x ROAS just to break even on ad spend — before accounting for any of the other costs of running the business. A brand with 60% blended margins can be healthy at 1.7x ROAS.
ROAS strips out all of that context. It tells you the ratio of revenue to ad spend. It doesn't tell you whether you're making money. Two brands with identical 4x ROAS can have completely different business health — one profitable, one burning cash — based on differences in their unit economics that ROAS doesn't capture.
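The break-even arithmetic above can be written down directly. A sketch using the two illustrative margins from the text:

```python
def break_even_roas(blended_margin: float) -> float:
    """ROAS at which gross profit from ad-driven revenue exactly covers ad spend.

    Each $1 of spend must generate $1 of gross profit:
    revenue * margin = spend  =>  revenue / spend = 1 / margin.
    """
    return 1 / blended_margin

print(f"30% margin: break-even ROAS = {break_even_roas(0.30):.1f}x")  # ~3.3x
print(f"60% margin: break-even ROAS = {break_even_roas(0.60):.1f}x")  # ~1.7x
```

Note this is break-even on ad spend alone; agency fees, overhead, and payment processing push the real threshold higher.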
When we ask a new client what their target ROAS is, the answer tells us whether they've thought through the unit economics or whether they're optimizing toward a number that feels good but might not correspond to profitability.
Reason 3: ROAS Can't See What You're Not Spending On
Here's the scenario that kills brands. You have 4x ROAS on Meta. You increase spend 50%. ROAS compresses to 3.2x. Conventional wisdom says you're hitting diminishing returns. So you pull back to the level where ROAS was highest.
What you might have missed: the 50% increase in Meta spend was also filling the top of the funnel for branded search, driving YouTube views that increase brand consideration, and generating social proof via purchases that organic content then amplified. When you pulled back the spend, those secondary effects also pulled back — but they showed up in your Google and organic channels, not in Meta's ROAS. Meta's reported efficiency looked better. Your actual business grew more slowly.
ROAS optimized in isolation has a terminal outcome: you scale back to the most efficient part of the audience (your warmest retargeting pools), performance looks great, and you stop growing. You've optimized yourself into a ceiling.
"A brand can have 4x ROAS and be losing money. A brand can have 1.8x ROAS and be building a durable, profitable customer base. ROAS doesn't tell you which one you are."
What to Track Instead: The Real Measurement Stack
Metric 1: Blended MER (Marketing Efficiency Ratio)
Total revenue divided by total marketing spend. No attribution modeling. No platform credit. Just: what did we make, and what did we spend to make it?
MER is the honest version of ROAS at the business level. It captures every dollar of revenue and every dollar of marketing cost, without trying to attribute specific sales to specific channels. It doesn't tell you what's working inside the marketing stack — but it tells you whether the marketing stack as a whole is working.
Your target MER depends on your unit economics. To find your floor: take your blended gross margin percentage and divide 1 by it. That's roughly your break-even MER, the point at which gross profit on your revenue just covers your marketing spend. A profitable MER should be meaningfully higher; how much higher depends on your CAC payback targets and growth-rate goals.
Know your MER target. Know your current MER. Know your MER trend over the last 90 days. Everything else in your measurement stack provides context for those three numbers.
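Those three numbers fit in a few lines of arithmetic. A sketch with hypothetical monthly figures and an illustrative target (both are placeholders, not benchmarks):

```python
def mer(total_revenue: float, total_marketing_spend: float) -> float:
    """Blended Marketing Efficiency Ratio: all revenue over all marketing spend."""
    return total_revenue / total_marketing_spend

# Hypothetical last-90-day monthly snapshots: (revenue, marketing spend).
months = [(540_000, 150_000), (576_000, 160_000), (595_000, 175_000)]

trend = [round(mer(rev, spend), 2) for rev, spend in months]

blended_margin = 0.45
break_even_mer = 1 / blended_margin  # ~2.22 at a 45% margin
target_mer = 3.0                     # illustrative; set yours from payback goals

current = trend[-1]
print(f"90-day trend: {trend}, break-even {break_even_mer:.2f}, target {target_mer}")
print("above target" if current >= target_mer else "below target: investigate mix")
```

In this sketch the brand is above target but trending down, which is exactly the kind of divergence the 90-day view exists to catch.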
Metric 2: New Customer CAC (nCAC)
Customer acquisition cost calculated on new customers only. This is the metric ROAS obscures most dangerously. A high ROAS can be driven almost entirely by returning customer purchases — your retargeting ads are converting existing customers who were going to buy anyway, and the platform is reporting the full revenue against your ad spend.
nCAC isolates what you actually paid to acquire someone new. This is the input metric that determines the health of your growth engine. If nCAC is stable or improving as you scale, your acquisition is efficient. If nCAC is climbing while ROAS looks fine, you're growing a retargeting-heavy account that looks great on paper and isn't actually expanding your customer base.
To calculate it properly: take ad spend attributable to new customer acquisition (your prospecting campaigns) and divide it by new customers acquired in that period. Not total revenue. Not all conversions. New customers acquired.
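That calculation, as a sketch (the spend and customer counts are hypothetical):

```python
def ncac(prospecting_spend: float, new_customers: int) -> float:
    """New-customer CAC: prospecting spend only, divided by net-new customers.

    Deliberately excludes retargeting spend and returning-customer orders,
    which is exactly what platform-reported ROAS blends in.
    """
    if new_customers == 0:
        raise ValueError("no new customers acquired in period")
    return prospecting_spend / new_customers

# Hypothetical month: $90K prospecting spend, 1,200 first-time customers.
print(f"nCAC = ${ncac(90_000, 1_200):.2f}")  # $75.00
```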
Metric 3: CAC Payback Period
How many days until the average new customer's cumulative revenue exceeds the cost to acquire them? This is the metric that connects acquisition efficiency to business health in a way that ROAS and nCAC alone cannot.
A brand with a $60 nCAC and a 30-day payback period is in a very different position than a brand with a $45 nCAC and a 120-day payback period. The first brand can scale aggressively and see returns quickly. The second brand needs more working capital to fund growth and is more exposed to customer churn before payback is achieved.
Payback period also tells you how much you can sustainably spend to acquire a customer given your cash position. It's the bridge between acquisition metrics and financial planning. Most brands don't calculate it. Every brand should.
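The calculation only needs a cohort revenue curve and an nCAC. A sketch with a hypothetical per-customer curve (a stricter version would multiply revenue by contribution margin before comparing; this follows the simpler definition used above):

```python
def payback_days(ncac, cumulative_revenue_by_day):
    """First day on which a cohort's cumulative per-customer revenue covers nCAC."""
    for day, cum_revenue in cumulative_revenue_by_day:
        if cum_revenue >= ncac:
            return day
    return None  # cohort hasn't paid back within the observed window

# Hypothetical cohort curve: (day, cumulative revenue per acquired customer).
cohort_curve = [(0, 42.0), (30, 58.0), (60, 71.0), (90, 86.0), (120, 98.0)]

print(payback_days(60.0, cohort_curve))   # this curve covers a $60 nCAC by day 60
print(payback_days(45.0, cohort_curve))   # and a $45 nCAC by day 30
```

Running the same curve against different nCAC levels also answers the planning question directly: it shows the maximum acquisition cost your cash cycle can tolerate.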
Metric 4: Incrementality (The Gold Standard)
Incrementality testing answers the only question that matters in attribution: what percentage of the revenue attributed to this channel would have happened anyway, without the channel? The difference is the true incremental value of that spend.
Well-run incrementality tests on Meta typically show that 20-50% of attributed conversions are non-incremental: purchases that would have happened via another channel or a direct visit without the paid ad. This means true incremental ROAS is often 20-50% lower than platform-reported ROAS.
Incrementality tests are expensive to run well (they require holdout groups and statistical rigor) and take time to produce reliable results. But a single well-run incrementality test will change how you think about every budget decision you make for the next 12 months. It's the highest-leverage measurement investment in the stack.
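The core holdout arithmetic is simple even though running the test well is not. A sketch with hypothetical conversion rates; a real test needs power analysis and significance testing, which this omits entirely:

```python
def incremental_roas(platform_roas, test_cvr, holdout_cvr):
    """Discount platform ROAS by the share of conversions that were not incremental.

    Incremental share = (test conversion rate - holdout conversion rate) / test rate.
    Conversions the unexposed holdout produced anyway are non-incremental.
    """
    incremental_share = (test_cvr - holdout_cvr) / test_cvr
    return platform_roas * incremental_share, incremental_share

# Hypothetical geo holdout: exposed regions convert at 2.0%, holdout at 0.7%.
adjusted, share = incremental_roas(platform_roas=4.2, test_cvr=0.020, holdout_cvr=0.007)
print(f"incremental share: {share:.0%}, true incremental ROAS: {adjusted:.2f}x")
```

With these illustrative rates, 35% of attributed conversions are non-incremental, and the headline 4.2x shrinks to roughly 2.7x, a number that would change most budget conversations.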
How to Build a Reporting Rhythm That Drives Decisions
Daily: MER vs. target (go/no-go signal).
Weekly: nCAC by channel, new vs. returning customer split, top creative performance indicators.
Monthly: CAC payback cohort analysis, LTV by acquisition cohort, channel mix efficiency review.
Quarterly: incrementality test results, full attribution model calibration.
This rhythm replaces ROAS as the operational metric and replaces gut feel as the strategic compass.
The Reporting Cadence Problem
Even when brands know the right metrics, they often have the wrong reporting cadence — checking daily dashboards for numbers that need weeks or months of data to mean anything.
Daily: MER is a valid daily check. If MER drops significantly in a single day, that's a signal to investigate. nCAC on a daily basis is usually noise — too few new customer conversions to be statistically meaningful.
Weekly: CAC trends are meaningful at weekly resolution. Creative performance metrics (thumb-stop, CTR, cost per initiated checkout) are valid weekly reads. Channel mix changes should be evaluated weekly but acted on cautiously.
Monthly: Cohort LTV analysis is only meaningful with enough customers in each cohort to be statistically significant. Most brands don't have enough volume to do meaningful 30-day cohort analysis at the weekly level.
The habit of checking daily ROAS and making decisions based on it is one of the most expensive measurement habits in DTC. It creates false urgency, leads to premature optimization, and trains the team to react to noise rather than signal. Slow down the decision cadence. Speed up the signal quality.
How ORCA Approaches This
ORCA is built around the reality that no single metric tells the full story. The platform is designed to surface MER, nCAC, and payback period as primary decision metrics — not ROAS — and to flag when different signals are diverging in ways that require investigation.
The specific thing ORCA does that matters for this problem: it reconciles platform-reported data against actual Shopify revenue on a daily basis, surfaces the gap between attributed and actual revenue, and builds a blended efficiency score that corrects for known platform over-attribution. The output is a cleaner signal than any single platform can provide — not because ORCA has solved attribution (nobody has), but because it's honest about what the data can and can't tell you.
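ORCA's actual reconciliation logic isn't shown here; the following is only an illustration of the idea the paragraph describes, comparing summed platform claims against actual revenue and deriving a correction factor. All figures and channel names are hypothetical, and a uniform scale-down is deliberately naive:

```python
def attribution_correction(platform_attributed, actual_revenue):
    """Derive a naive correction factor from the claimed-vs-actual revenue gap.

    Illustration only: scales every channel's claim down uniformly so totals
    match actual revenue. Real reconciliation would weight channels unevenly.
    """
    total_claimed = sum(platform_attributed.values())
    factor = actual_revenue / total_claimed
    corrected = {ch: rev * factor for ch, rev in platform_attributed.items()}
    return factor, corrected

claims = {"meta": 400_000, "google": 250_000, "email": 150_000}  # hypothetical
factor, corrected = attribution_correction(claims, actual_revenue=600_000)
print(f"correction factor: {factor:.2f}")
```

With these numbers the platforms collectively over-claim by a third, so every channel's reported revenue gets scaled to 75% before it feeds any blended efficiency number.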
If you're managing $150K+/month in paid spend and making budget decisions based on platform-reported ROAS, you're navigating with a compass that points slightly in the wrong direction. It works well enough when you're close to where you started. The further you scale, the more the error compounds — and the more expensive the eventual correction.
The fix is straightforward: build the real metrics stack, commit to it, and stop letting the platforms grade their own homework.
Scaling a DTC brand spending $150K+/month on paid?
We built this system for brands at your level. Tell us about your brand and we'll show you what this looks like for your specific situation.
Tell us about your brand →