
Why Attribution Models Lie

Attribution gives precise answers to the wrong questions. Here is what to do instead.

Attribution models promise to answer the question every marketer wants answered: which marketing activities drive results? They assign credit for conversions to touchpoints in the customer journey. They produce numbers, percentages, charts. They look precise.

They lie.

Not intentionally. But the precision is false. The certainty is manufactured. The answers, while mathematically consistent, often mislead more than they inform.

What Attribution Actually Does

Attribution models take a simple fact, that a conversion happened, and apply a rule to distribute credit across touchpoints. Last click gives all credit to the final touchpoint. First click gives all credit to the initial touchpoint. Linear distributes credit evenly. Time decay gives more credit to recent touchpoints.

Each model produces different answers from the same data. None of them answers the question "what caused the conversion." They answer "given our rule, how should we allocate credit." The rule is arbitrary. The allocation follows the rule. The connection to causation is assumed, not demonstrated.
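To make the arbitrariness concrete, here is a minimal sketch of four common rules applied to the same journey. This is illustrative Python, not any vendor's implementation; the journey, the weighting scheme, and the half_life parameter are all assumptions.

```python
# Four attribution rules applied to one hypothetical journey.
# Same data in, four different credit allocations out.

def last_click(touchpoints):
    credit = {t: 0.0 for t in touchpoints}
    credit[touchpoints[-1]] = 1.0  # all credit to the final touchpoint
    return credit

def first_click(touchpoints):
    credit = {t: 0.0 for t in touchpoints}
    credit[touchpoints[0]] = 1.0  # all credit to the initial touchpoint
    return credit

def linear(touchpoints):
    share = 1.0 / len(touchpoints)  # even split across the journey
    return {t: share for t in touchpoints}

def time_decay(touchpoints, half_life=2.0):
    # Weight halves for every `half_life` steps away from the conversion,
    # then normalize so credit sums to 1. The half-life is arbitrary.
    n = len(touchpoints)
    weights = [2 ** (-(n - 1 - i) / half_life) for i in range(n)]
    total = sum(weights)
    return {t: w / total for t, w in zip(touchpoints, weights)}

journey = ["article", "social_post", "brand_search_ad"]
for rule in (last_click, first_click, linear, time_decay):
    print(f"{rule.__name__:11s} {rule(journey)}")
```

Run it and one three-touchpoint journey yields four different allocations. Nothing about the data changed; only the rule did.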

The Measurement Problem

Attribution can only credit what it can see. It cannot see:

Offline touchpoints. A word-of-mouth recommendation, a branded truck driving past, an overheard conversation. These influence purchases but leave no digital trail.

Cross-device behavior. Someone researches on their phone, converts on their laptop. Without sophisticated tracking, these appear as separate journeys.

Brand effects. Years of brand building that make someone choose you when they finally enter the market. Attribution credits the proximate touchpoint, not the accumulated brand equity.

Category creation. Content that makes someone realize they have a problem worth solving. The conversion happens months later and credits whatever touchpoint preceded it.

The model attributes credit to what it can measure. What it cannot measure gets zero credit. This biases results toward trackable, recent, digital touchpoints.

The Last-Click Problem

Last-click attribution is the most common and the most misleading. It gives 100% credit to the final touchpoint before conversion.

Consider someone who sees your brand mentioned in an article, sees a social media post a week later, searches your brand name and clicks a search ad, then converts. Last-click credits the search ad. But would they have searched if not for the article and social post?

Last-click systematically undervalues awareness and consideration activities while overvaluing demand capture activities. Google search ads, the canonical demand-capture channel, look great in last-click attribution because they capture demand that was created elsewhere.

This creates perverse incentives. Budget shifts to demand capture, starving the awareness activities that create demand to capture. Eventually, there is less demand to capture, and even the capture channels underperform.

The Incrementality Gap

Attribution tells you what touchpoint preceded the conversion. It does not tell you whether that touchpoint was necessary.

Someone who was going to buy anyway and happened to click an ad gets attributed to the ad. The ad gets credit for a sale it did not cause. Multiply this across thousands of conversions, and attribution tells you your advertising is more effective than it actually is.

True advertising effectiveness requires incrementality measurement: what sales happened because of the advertising that would not have happened without it? This is a harder question than attribution answers.
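A minimal sketch of what answering that harder question looks like, assuming a randomized holdout: withhold the ads from a control group, then compare conversion rates. Every number below is invented for illustration.

```python
# Incrementality via a randomized holdout.
# The exposed group saw the ads; the holdout group had them withheld.
exposed = {"users": 100_000, "conversions": 2_400}
holdout = {"users": 100_000, "conversions": 2_100}

exposed_rate = exposed["conversions"] / exposed["users"]  # 2.4%
holdout_rate = holdout["conversions"] / holdout["users"]  # 2.1%

# Conversions that would have happened anyway, scaled to the exposed group.
baseline = holdout_rate * exposed["users"]  # 2,100

incremental = exposed["conversions"] - baseline  # 300
print(f"Incremental conversions: {incremental:.0f}")
print(f"Relative lift: {exposed_rate / holdout_rate - 1:.1%}")  # ~14.3%
```

In this invented example, attribution would credit the ads with all 2,400 conversions; the experiment suggests they caused roughly 300. The gap between those two numbers is the incrementality gap.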

The Gaming Problem

Metrics drift from reality when people optimize against them. If attribution determines budget allocation, people will optimize for attribution.

Want more last-click credit? Run brand search ads that appear when people search your brand name. You will "capture" conversions from people who were already going to convert. Attribution credits the ads. Budget flows to a channel that did not actually drive incremental sales.

The model becomes the target. Gaming the model becomes the work. The connection to actual business outcomes loosens.

What to Do Instead

If attribution models lie, how should marketers make decisions?

Accept uncertainty. You will never know exactly which activities drove which conversions. Make peace with this. Decisions under uncertainty are still possible.

Use multiple views. Look at attribution data alongside other evidence: controlled experiments, geographic lift studies, time series analysis, marketing mix modeling. No single method is correct, but triangulating across methods reduces error.

Focus on the controllable. You can measure and optimize conversion rate, follow-up speed, landing page performance. These are closer to your operations and less subject to attribution distortion.

Run experiments. Turn things off and measure what happens. Geographic tests, time-based tests, incrementality studies. These are harder than looking at attribution reports but more reliable; a sketch of a geographic test appears below.

Think about system health. Is the whole demand capture system working? Is revenue growing relative to marketing investment? These aggregate measures are more meaningful than touchpoint-level attribution.
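To illustrate the geographic test mentioned above, here is a minimal sketch using a difference-in-differences comparison: pause a channel in some regions, leave it running in comparable regions, and compare the changes. The region groupings and figures are hypothetical.

```python
# Geo holdout test: weekly conversions before and during the test.
test_regions    = {"before": 5_000, "during": 4_600}  # channel paused here
control_regions = {"before": 5_100, "during": 5_150}  # channel unchanged

# Difference-in-differences: the change in test regions minus the change
# in control regions estimates what the channel was actually contributing.
test_change    = test_regions["during"] - test_regions["before"]        # -400
control_change = control_regions["during"] - control_regions["before"]  # +50

estimated_effect = test_change - control_change  # -450
print(f"Estimated weekly conversions driven by the channel: {-estimated_effect}")
```

If pausing the channel costs roughly 450 conversions a week while the control regions hold steady, the channel is doing real work, whatever the attribution report says.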

The Operator Perspective

From an operator standpoint, the problem with attribution is not that it provides no information. It is that it provides false confidence. People make decisions as if the numbers are true when the numbers are artifacts of arbitrary models.

Systems scale judgment. But if the judgment being scaled is based on misleading metrics, the system scales bad decisions. Attribution that misleads leads to resource allocation that underperforms.

The best operators hold attribution loosely. They use it as one input among many. They remain skeptical of precision. They test and verify rather than assume the model is correct.
