Most DTC brands are optimizing against a number that's wrong. Not slightly off. Wrong in a way that systematically rewards the wrong channels, punishes the right ones, and compounds over time into a media mix that looks efficient but isn't building anything.
Last-click attribution is the culprit. It's the default setting in Google Analytics, the implied logic behind most channel reporting, and the framework most finance teams use to evaluate paid media ROI. It also happens to be one of the most misleading ways to understand how your marketing actually works.
What last-click actually measures
Last-click attribution assigns 100% of the conversion credit to the final touchpoint before a purchase. If a customer saw your CTV ad, clicked a YouTube video, read a review, then converted after clicking a branded Google search ad, last-click credits Google Search. Everything upstream gets nothing.
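The mechanics are simple enough to sketch in a few lines. This is a toy illustration, not any platform's actual implementation, and the journey below is hypothetical:

```python
# Minimal sketch of last-click attribution: 100% of the conversion
# credit goes to the final touchpoint, zero to everything upstream.

def last_click_credit(touchpoints):
    """Return per-touchpoint credit under last-click attribution.

    `touchpoints` is ordered oldest to newest; the last entry is the
    final touch before purchase.
    """
    last = len(touchpoints) - 1
    return {tp: (1.0 if i == last else 0.0) for i, tp in enumerate(touchpoints)}

journey = ["CTV ad", "YouTube click", "Review read", "Branded search click"]
print(last_click_credit(journey))
# Only "Branded search click" carries credit; the three upstream touches get 0.
```

However sophisticated the dashboard around it looks, this assignment rule is the entire model.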
This isn't just a measurement quirk. It actively distorts decisions. When you're evaluating channel performance monthly and last-click is your framework, you will cut the channels that don't get the last click. Over time, that means cutting the channels doing the heaviest lifting on awareness and consideration.
The channel that closes the most last-click conversions isn't necessarily the channel creating demand. It's often just the channel positioned at the end of a journey that other channels built.
Who benefits most from last-click (and why that's a problem)
Brand search captures a disproportionate share of last-click credit. When someone searches your brand name and converts, that Google campaign shows an incredible ROAS. But did that paid click actually drive the purchase, or did the customer already have intent? In most cases, they were going to buy regardless. You paid to intercept a conversion that was already happening.
Meta also benefits heavily from last-click, especially with its default 7-day click attribution window. Someone clicks an ad, converts a week later through direct traffic or organic search, and Meta reports the conversion. It's not wrong, exactly. But it creates a version of reality where Meta looks better than it sometimes is, and brand search looks incredible despite often just being a tax on intent you already created.
Meanwhile, CTV, influencer, and upper-funnel display get almost nothing. These channels work by building awareness, priming the purchase, shifting brand perception. None of that shows up in last-click because there's nothing to click. They sit at the top of the funnel, not at the end of it.
What last-click misses: a few specific examples
CTV and streaming ads. A customer watches your 30-second ad on Hulu. Three days later they search your brand name and buy. Last-click credits the branded search campaign. CTV gets nothing. You conclude CTV doesn't work and cut it. Brand search ROAS looks amazing. You increase brand search budget. You've now optimized yourself into a closed loop that doesn't generate new demand.
Influencer and content. Someone discovers your brand through a creator they follow. They watch the video, go to your site, leave. A week later they see a Meta retargeting ad and convert. Last-click credits Meta. Influencer gets no credit despite being the touchpoint that generated brand awareness in the first place. You cut influencer spend, demand softens, and you wonder why Meta efficiency is declining.
Brand search itself. This is the most counterintuitive one. High brand search ROAS often means your brand is healthy and other channels are working. But last-click reporting makes it look like brand search is the engine. Brands sometimes respond by increasing brand search budget and cutting the upper-funnel work that was generating the brand intent. ROAS tanks. They conclude the market got harder rather than recognizing they cut the channel generating demand.
What better attribution looks like
There's no single perfect solution, but a layered approach gets you close enough to act on.
Holdout testing is the most direct answer. The basic version: split your markets into two comparable groups, run the channel in one and turn it off in the other, then compare revenue or conversion rate trends over the test period. If the group without the channel underperforms meaningfully, the channel is contributing real demand. If there's no difference, it's likely capturing existing intent rather than generating new demand. Harder to run than checking a dashboard, but it's the only way to actually understand causation instead of just correlation.
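The readout itself is simple once the groups are set up. A minimal sketch, assuming you've already split markets into comparable groups and collected revenue per period; group names and numbers below are illustrative:

```python
# Evaluating a geo holdout: compare total revenue in markets where the
# channel ran (test) against markets where it was turned off (control).
# A rigorous test would also check the groups tracked each other pre-test.

def holdout_lift(test_revenue, control_revenue):
    """Relative lift of the test group (channel on) over control (channel off)."""
    test, control = sum(test_revenue), sum(control_revenue)
    return (test - control) / control

# Weekly revenue over a four-week test window (illustrative numbers).
markets_with_ctv = [120_000, 118_000, 125_000, 131_000]
markets_without_ctv = [119_000, 112_000, 113_000, 114_000]

lift = holdout_lift(markets_with_ctv, markets_without_ctv)
print(f"Lift: {lift:.1%}")  # meaningful positive lift = the channel adds real demand
```

A lift near zero is the tell: the channel's dashboard conversions were mostly intent that existed anyway.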
Media mix modeling (MMM) uses statistical regression to allocate revenue contribution across channels based on spend fluctuations over time. It's not perfect at the short-term or creative level, but it's excellent at revealing channel-level contribution over a longer horizon. When we've run MMM for brands, the outputs often look very different from their last-click reports. Brand search consistently shows one of the highest MMM ROIs, but the implication isn't to scale it. It's to protect the upper-funnel channels generating that branded intent.
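The core regression idea can be shown with synthetic data where the "true" channel effects are known. This is a deliberately stripped-down toy, not a production MMM: real models also handle adstock (carryover), saturation curves, and seasonality, and the channel names and numbers here are invented.

```python
import numpy as np

# Toy MMM: fit revenue as a linear function of per-channel spend and read
# the coefficients as revenue contribution per dollar of spend.
rng = np.random.default_rng(42)
weeks = 52
spend = rng.uniform(1_000, 10_000, size=(weeks, 2))  # columns: CTV, brand search

# Synthetic data-generating process: $3.00 back per CTV dollar, $1.50 per
# brand-search dollar, plus baseline sales and weekly noise.
revenue = 5_000 + 3.0 * spend[:, 0] + 1.5 * spend[:, 1] + rng.normal(0, 500, weeks)

X = np.column_stack([np.ones(weeks), spend])   # intercept = baseline demand
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print(f"baseline={coef[0]:.0f}  CTV ROI={coef[1]:.2f}  brand search ROI={coef[2]:.2f}")
```

Because the regression sees spend *fluctuations* over many periods, it can credit channels that never get a click, which is exactly what last-click structurally cannot do.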
Blended CAC and new customer ratio are underrated as leading indicators. If your last-click ROAS is holding steady but your new customer acquisition rate is declining month-over-month, something is wrong upstream. You're becoming more efficient at converting existing demand while failing to generate new demand. Blended CAC tells the story that platform ROAS won't. Our Performance Tracker maps this daily across all channels so you're not manually stitching it together.
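Both metrics are one-line ratios, which is part of why they're underrated. A sketch with illustrative monthly numbers (not from any real account) showing the pattern described above: flat spend, steady-looking platform numbers, eroding new-customer share:

```python
# Blended CAC: total media spend across ALL channels / new customers acquired.
# New-customer ratio: share of orders coming from first-time buyers.
# Neither depends on any attribution model, which is the point.

def blended_cac(total_spend, new_customers):
    return total_spend / new_customers

def new_customer_ratio(new_customers, total_customers):
    return new_customers / total_customers

# Three illustrative months: spend is flat, but fewer of the same
# total orders are first-time buyers, so blended CAC climbs.
months = [
    ("Jan", 50_000, 400, 1_000),  # (month, spend, new customers, total customers)
    ("Feb", 50_000, 360, 1_000),
    ("Mar", 50_000, 320, 1_000),
]
for name, spend, new, total in months:
    print(f"{name}: blended CAC ${blended_cac(spend, new):.0f}, "
          f"new customer ratio {new_customer_ratio(new, total):.0%}")
```

A platform dashboard could show identical ROAS across all three months while this trend says demand generation is stalling.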
The goal isn't to find a perfect attribution model. It's to triangulate across several imperfect ones so you're not making budget decisions based on a single misleading number.
What to actually do with this
Start by identifying which channels in your mix have near-zero last-click credit. That's not evidence they're not working. It's evidence you need a better framework to evaluate them.
Then look at your new customer ratio over the last 12 months. Is it stable or declining? If it's declining while revenue holds, you're living off existing demand rather than generating new demand. That's a sign your upper-funnel is underfunded relative to what last-click reporting suggests.
If you've never run a holdout test, run one. Pick your highest-spend channel and design a simple holdout. The results are usually clarifying, sometimes surprising, and always more useful than another month of last-click dashboard staring.
Attribution isn't a reporting decision, it's a budget allocation one
Platforms have a clear interest in making their own channels look as effective as possible. Last-click hands them that story. The brands we work with that grow most consistently are the ones treating platform ROAS as one signal, not the answer, and cross-referencing it with blended CAC, new customer rate, and the occasional holdout test to check whether what the dashboard says is actually what's happening. For a concrete example of how better attribution leads to better forecasting, see the Birdwell revenue forecasting case study.
Not sure if your attribution model is lying to you?
We audit media mixes and measurement frameworks for DTC brands regularly. If your ROAS looks great but growth has stalled, there's usually something worth looking at.