I've been building MMMs since 2016, first at an adtech startup we built specifically around media mix modeling, then at Rockerbox, where I led data science and built the MMM product. The most consistent thing I've seen across all of it: brands invest in an MMM, get a model back, and change almost nothing. Not because the output is wrong. Because nobody helped them understand what to actually do with it.
That's a gap between expectations and methodology. MMM is a genuinely rigorous tool — it does things platform attribution simply can't. But it also has hard limits, and mistaking it for something it's not is how you end up with an expensive report sitting in a shared drive.
What MMM is actually doing
The math is regression. You're looking at how variation in spend across channels correlates with variation in revenue over time, controlling for everything else you can measure: seasonality, promotions, price changes, macro trends. The model learns the historical relationship between spending and outcomes for each channel, and estimates what portion of revenue each channel contributed.
The output is a set of channel-level ROI estimates: for every dollar spent on Meta, you got approximately X dollars in revenue contribution; for every dollar on YouTube, approximately Y. These estimates aren't transaction-level; they're aggregated statistical approximations. They're also, notably, backward-looking: MMM tells you what happened over the period it was trained on.
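If you want to see the shape of that in code, here's a toy sketch. Everything in it is synthetic and simplified (made-up data and coefficients, plain OLS instead of the Bayesian fitting most production MMMs use, no adstock or saturation transforms yet), but it shows where the channel ROI numbers come from:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_weeks = 104

# Synthetic weekly inputs: channel spend plus the controls you'd pull
# from your own data (promo calendar, seasonality terms, etc.).
df = pd.DataFrame({
    "meta_spend":    rng.uniform(20_000, 80_000, n_weeks),
    "youtube_spend": rng.uniform(5_000, 40_000, n_weeks),
    "promo_flag":    rng.integers(0, 2, n_weeks),
})
week = np.arange(n_weeks)
df["season_sin"] = np.sin(2 * np.pi * week / 52)  # annual seasonality
df["season_cos"] = np.cos(2 * np.pi * week / 52)

# Fake ground truth so the regression has something to recover.
df["revenue"] = (
    200_000
    + 2.5 * df["meta_spend"]
    + 1.8 * df["youtube_spend"]
    + 40_000 * df["promo_flag"]
    + 30_000 * df["season_sin"]
    + rng.normal(0, 15_000, n_weeks)
)

X = sm.add_constant(df[["meta_spend", "youtube_spend",
                        "promo_flag", "season_sin", "season_cos"]])
fit = sm.OLS(df["revenue"], X).fit()

# The spend coefficients are the channel-level ROI estimates:
# incremental revenue per dollar, over the training window.
print(fit.params[["meta_spend", "youtube_spend"]].round(2))
```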
MMM is not a real-time optimization tool. It's a strategic measurement layer. If you're trying to use it to justify daily bid decisions or weekly budget shifts, you're using the wrong tool for the job.
What MMM is genuinely good at
Whether your channels are actually driving revenue. This is the main use case. MMM can tell you whether brand search spend is generating demand or just capturing intent that would have converted anyway. We ran this for an eight-figure DTC brand and found brand search had a median MMM ROI 2.35x higher than platform ROAS suggested, with CPMs 88% lower than Meta's. The right call wasn't to scale brand search. It was to protect it and put more behind the upper-funnel channels creating that branded intent in the first place. Read the full breakdown in our brand search MMM case study.
Long-run channel efficiency. Platform attribution windows are short. MMM can model the revenue contribution of a channel over a longer decay curve, capturing effects that happen weeks after initial exposure. This is particularly valuable for upper-funnel channels like CTV, YouTube, and display, which often show poor short-term attribution but meaningful long-term contribution when modeled properly. See how we applied this to YouTube spend in our YouTube MMM case study.
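The standard mechanism for that longer decay curve is an adstock transform applied to spend before it enters the regression. A minimal sketch with geometric decay; the 0.7 rate is illustrative, since in practice the decay is a fitted parameter per channel:

```python
import numpy as np

def geometric_adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Carry a fraction of each week's spend effect into later weeks."""
    out = np.zeros_like(spend, dtype=float)
    carryover = 0.0
    for t, s in enumerate(spend):
        carryover = s + decay * carryover
        out[t] = carryover
    return out

# A single burst of upper-funnel spend keeps contributing after the
# flight ends: 50000, 35000, 24500, 17150, ...
flight = np.array([0, 0, 50_000, 0, 0, 0, 0, 0], dtype=float)
print(geometric_adstock(flight, decay=0.7))
```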
Saturation curves. A good MMM will show you the diminishing returns curve for each channel: where additional spend starts yielding less than proportional revenue. This tells you whether you're over- or under-invested in a given channel relative to its efficiency curve.
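One common functional form for that curve is a Hill function. A sketch with illustrative parameters, showing how the value of the next dollar shrinks as spend grows:

```python
def hill_saturation(spend: float, half_sat: float, slope: float = 1.0) -> float:
    """Hill curve: response climbs steeply at first, then flattens out."""
    return spend**slope / (spend**slope + half_sat**slope)

# Marginal effect of one more dollar at different spend levels.
# half_sat (spend at 50% of max response) is an illustrative value.
for s in (10_000, 50_000, 100_000, 200_000):
    marginal = hill_saturation(s + 1, 50_000) - hill_saturation(s, 50_000)
    print(f"${s:>7,}: marginal effect per extra dollar = {marginal:.2e}")
```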
Macro-level budget reallocation. When you're deciding whether to shift 15% of budget from Meta to CTV or from paid social to Google Shopping, MMM gives you a data-backed basis for that conversation. It won't give you certainty, but it will give you a better-informed hypothesis than gut instinct or platform reporting alone.
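Mechanically, that conversation reduces to comparing marginal ROI across channels at their current spend levels. A sketch using the same Hill-style curves; every number here is hypothetical, where in practice the curve parameters would come from the fitted model:

```python
def marginal_roi(spend: float, coef: float, half_sat: float) -> float:
    """Revenue gained from one more dollar, given a saturating
    response curve (coef and half_sat would come from the model)."""
    def response(s: float) -> float:
        return coef * s / (s + half_sat)
    return response(spend + 1) - response(spend)

channels = {
    # name: (current weekly spend, fitted scale, half-saturation point)
    "meta": (120_000, 300_000, 60_000),
    "ctv":  (20_000, 200_000, 80_000),
}
mroi = {name: marginal_roi(*params) for name, params in channels.items()}
# Shift marginal budget from the lowest-mROI channel toward the highest:
# here CTV's next dollar returns ~$1.60 vs Meta's ~$0.56.
print(mroi)
```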
What MMM is not good at
Granular creative decisions. MMM works at the channel level, not the ad level. It cannot tell you whether your UGC hook outperformed your brand testimonial, or whether your :15 drove more incremental revenue than your :30. Creative testing requires different methodology (controlled A/B tests, platform experiments) and can't be outsourced to the model.
Short-term optimization. If you're trying to understand whether to raise bids on Tuesday or which audience segment to prioritize this week, MMM is the wrong tool. The model's time horizon is weeks to months, not days. Using it to justify short-term decisions is misapplying it.
New channels with limited spend history. MMM needs variation in your spend data to identify signal. If you've never run CTV, or you ran it for only 30 days, the model doesn't have enough data to accurately estimate its contribution. For new channels, controlled testing (geo holdouts, brand lift studies) is more reliable than MMM until you've built enough spend history.
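For reference, the arithmetic behind a geo holdout is simple; the market matching and significance testing are what carry the weight. A toy illustration with made-up numbers:

```python
import numpy as np

# Hypothetical geo holdout: run the new channel (say, CTV) only in the
# test geos, hold it out of matched control geos, compare revenue.
test_revenue    = np.array([105_000, 98_000, 112_000, 101_000])  # weekly
control_revenue = np.array([95_000, 94_000, 100_000, 96_000])
ctv_spend = 30_000  # total spend in the test geos over the window

lift = int((test_revenue - control_revenue).sum())
print(f"Incremental revenue: ${lift:,} -> incremental ROAS ~ {lift / ctv_spend:.2f}")
```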
Telling you what to do. This is the most important limitation. MMM tells you what happened. It doesn't tell you what to do next. The translation from model output to business decision requires human judgment about what's changed in the market, what constraints exist on budget, what the brand's strategic priorities are. Treating MMM as a prescription rather than an input is where brands make mistakes with it.
How to actually use the output
Run MMM as one input among several. Cross-reference the channel ROI estimates against your holdout test results and platform-reported data. Where all three point in the same direction, you have conviction. Where they disagree, you have a question worth investigating before making a major budget shift.
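That cross-check doesn't need to be fancy. A trivial sketch of the triangulation, with hypothetical numbers; the breakeven threshold of 1.0 is an assumption you'd set per business:

```python
import pandas as pd

# Hypothetical readings for each channel from three measurement sources.
signals = pd.DataFrame({
    "channel":       ["meta", "brand_search", "ctv"],
    "mmm_roi":       [2.1, 2.4, 1.8],
    "holdout_roi":   [1.9, 2.2, 0.9],
    "platform_roas": [3.5, 8.0, 0.4],
})
sources = ["mmm_roi", "holdout_roi", "platform_roas"]

# Conviction where all three sources agree on the direction vs. breakeven;
# a question worth investigating where they split (CTV, in this example).
signals["all_agree"] = (signals[sources] > 1.0).nunique(axis=1) == 1
print(signals)
```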
Use the saturation curves to identify the channels where you're most likely over-invested relative to the model's efficiency estimates. These are the first places to look when you need to reallocate budget, because the model is telling you that additional dollars there are past the point of proportional returns.
At LuckyRev we revisit the MMM quarterly. The model is trained on historical data, and your channel mix is always shifting, so the output needs to keep up with how the business is actually spending.
One layer of your measurement stack, not the whole stack
MMM is a directional tool, not a single source of truth. It's excellent for channel-level efficiency benchmarking, saturation analysis, and budget allocation conversations at the portfolio level. The brands that get the most from it treat it as one layer of their measurement stack, and they actually change decisions based on what it tells them. That second part is rarer than it should be.
Want to actually use your MMM output?
I built MMM products for years before co-founding LuckyRev. We run LuckyProphet for brands who want the model and someone who knows how to turn it into decisions.
Explore LuckyTools →