Attribution Analysis vs. Causal Inference
Explaining the past is not the same as predicting the future

Every investor, manager, or policymaker faces the same question: what really made the difference? Did our new strategy drive returns, or were we just in the right market cycle? Did more women on the board improve decision quality, or did stronger firms happen to have more inclusive boards?
To answer this, finance has long leaned on tools like attribution analysis and its close cousin, driver-based analysis. These approaches break down results into neat categories: how much of a portfolio’s return came from stock selection versus sector allocation, or how much of a change in sales came from pricing versus volume. They are tidy, intuitive, and useful.
The key difference is this: attribution explains the past, while causal inference, albeit indirectly, speaks to the future. It asks a harder but more valuable question: what would have happened if we had done something differently? This is a counterfactual question and, in my opinion, it marks the real frontier of decision-making with data.
Attribution analysis: a rearview mirror
Attribution analysis decomposes performance into buckets. In asset management, for example, performance attribution might split excess return into allocation effect (picking the right sectors) and selection effect (picking the right stocks within a sector). The goal is to explain where results came from.
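To make that concrete, here is a minimal sketch of one common Brinson-style decomposition; the sectors, weights, and returns are made up purely for illustration, and real attribution systems add more terms (interaction, currency, and so on).

```python
import pandas as pd

# Hypothetical sector-level data: portfolio/benchmark weights and within-sector returns.
data = pd.DataFrame({
    "sector":      ["Tech", "Energy", "Health"],
    "w_portfolio": [0.50, 0.20, 0.30],
    "w_benchmark": [0.40, 0.30, 0.30],
    "r_portfolio": [0.12, 0.02, 0.06],
    "r_benchmark": [0.10, 0.03, 0.05],
})

# Benchmark total return, the reference point for the allocation effect.
r_bench_total = (data["w_benchmark"] * data["r_benchmark"]).sum()

# Allocation effect: did over/underweighting a sector relative to the benchmark pay off?
data["allocation"] = (data["w_portfolio"] - data["w_benchmark"]) * (data["r_benchmark"] - r_bench_total)

# Selection effect: did the stocks picked within each sector beat that sector's benchmark?
data["selection"] = data["w_benchmark"] * (data["r_portfolio"] - data["r_benchmark"])

print(data[["sector", "allocation", "selection"]])
print("Allocation total:", round(data["allocation"].sum(), 4))
print("Selection total:", round(data["selection"].sum(), 4))
```

The output is exactly the kind of tidy breakdown that ends up on a slide: how much of the excess return each decision "explains" in the period that already happened.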
Driver-based analysis is similar. It breaks down outcomes into “drivers” like volume, price, costs, and currency effects. CFO dashboards and insurance hackathons love this approach: it gives managers the comfort of a neat pie chart showing what moved the needle.
The strength of these methods is clarity. They show which levers moved historically. The weakness is that they stop there. They cannot tell you whether changing those levers in the future will actually deliver the same result.
The counterfactual question
Causal inference, on the other hand, starts from a different ambition: answering “what if?”
What if the company had not diversified into renewables?
What if interest rates had stayed flat?
What if women in top management increased from 10% to 30%?
These are not historical decompositions. They are counterfactuals: worlds we did not observe but want to reason about. The statistical machinery is more demanding — but the payoff is insight into interventions, not just narratives.
A simple example: women in management
Suppose you want to know whether more women in top management improves a company’s share price.
Attribution/driver analysis might show: companies with more women had higher returns, and 30% of the variation in share price is “driven” by this factor.
Causal inference asks: if you took the same company and changed the percentage of women in management, would its share price change?
The distinction seems subtle, but it’s huge. Attribution is a decomposition of past data; causal inference is a model of interventions.
Why attribution can mislead
The danger is when people treat attribution as if it were causal. If women in management is correlated with share price, attribution will happily assign it a “driver effect.” But maybe those companies also have higher profits, better governance, or simply operate in less volatile industries. The attribution will not disentangle that.
Causal inference, by contrast, forces us to map the relationships explicitly. We use Directed Acyclic Graphs (DAGs) to represent possible causal structures:
Women in management → better oversight → higher share price (causal channel).
Profitability → ability to hire inclusively → women in management (confounding).
By modeling these paths, we can control for confounders and isolate the true causal effect.
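As a sketch, the same two stories can be written down as a small graph in Python; the variable names are hypothetical, and the extra profitability → share price arrow makes explicit why profitability acts as a confounder rather than just another driver.

```python
import networkx as nx

# Hypothetical variable names encoding the two stories above.
dag = nx.DiGraph([
    ("women_in_mgmt", "oversight"),      # causal channel ...
    ("oversight", "share_price"),        # ... through better oversight
    ("profitability", "women_in_mgmt"),  # confounding: profitable firms can hire more inclusively
    ("profitability", "share_price"),    # ... and profitability also moves the share price
])

# A confounder is a common cause of treatment and outcome: here, profitability.
# It belongs in the adjustment set; oversight is a mediator and should be left
# alone when estimating the total effect.
confounders = set(dag.predecessors("women_in_mgmt")) & set(nx.ancestors(dag, "share_price"))
print(confounders)  # {'profitability'}
```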
Regression vs. causal graphs
Many analysts already run regressions to “control for” things like profit, sales, or volatility. But regressions alone are not enough. Without a causal graph, you can end up controlling for the wrong variable (like a mediator instead of a confounder) and bias your estimates.
This is where tools like DoWhy help. They don’t invent new estimators; they provide a framework that links your causal assumptions (the DAG) to the estimation step. For example:
Total effect: Regress share price on women in management, controlling for size, profit, and sector.
Direct effect: Add volatility as a control, blocking the “risk” channel, and see what remains.
This workflow makes assumptions explicit and helps analysts avoid common pitfalls.
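Here is a minimal sketch of that workflow, assuming DoWhy's standard CausalModel API and a synthetic, made-up dataset with hypothetical column names (women_share, profit, size, volatility, share_return). It is an illustration of the pattern, not a real study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from dowhy import CausalModel

# Synthetic, made-up data just so the sketch runs end to end.
rng = np.random.default_rng(42)
n = 300
profit = rng.normal(size=n)
size = rng.normal(size=n)
women_share = 0.3 * profit + 0.2 * size + rng.normal(size=n)
volatility = -0.4 * women_share + rng.normal(size=n)          # the "risk" channel (mediator)
share_return = (0.5 * women_share + 0.6 * profit + 0.3 * size
                - 0.2 * volatility + rng.normal(size=n))
df = pd.DataFrame({"women_share": women_share, "share_return": share_return,
                   "profit": profit, "size": size, "volatility": volatility})

# The assumed DAG: profit and size confound; volatility mediates part of the effect.
graph = """graph [directed 1
  node [id "women_share" label "women_share"]  node [id "share_return" label "share_return"]
  node [id "profit" label "profit"]  node [id "size" label "size"]
  node [id "volatility" label "volatility"]
  edge [source "profit" target "women_share"]  edge [source "profit" target "share_return"]
  edge [source "size" target "women_share"]    edge [source "size" target "share_return"]
  edge [source "women_share" target "volatility"]
  edge [source "volatility" target "share_return"]
  edge [source "women_share" target "share_return"]
]"""

model = CausalModel(data=df, treatment="women_share", outcome="share_return", graph=graph)

# Total effect: DoWhy reads the backdoor set (profit, size) off the graph and the
# linear-regression estimator adjusts for it; the mediator volatility is left alone.
estimand = model.identify_effect()
total = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print("Total effect estimate:", round(total.value, 3))

# Direct effect (rough version): also condition on volatility to block the risk channel.
direct = smf.ols("share_return ~ women_share + profit + size + volatility", data=df).fit()
print("Direct effect estimate:", round(direct.params["women_share"], 3))
```

The point is not the numbers but the structure: the DAG is written down first, and the adjustment set follows from it rather than from habit.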
Small data, big clarity
At Wangari, we’ve noticed something interesting: small samples often work better for causal inference than for machine learning. In finance and sustainability, datasets are rarely “big” — a few dozen firms, a decade of history. Throwing a neural net at that is meaningless. But a well-drawn DAG plus regressions, with honest uncertainty intervals (bootstrap confidence intervals), can reveal clear insights.
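A minimal sketch of such a bootstrap interval, assuming a small firm-level DataFrame like the hypothetical one above and statsmodels for the regression:

```python
import numpy as np
import statsmodels.formula.api as smf

def bootstrap_ci(df, formula, coef, n_boot=2000, seed=0):
    """Resample firms with replacement, refit the adjusted regression, and
    report a percentile confidence interval for one coefficient."""
    rng = np.random.default_rng(seed)
    draws = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(df), size=len(df))   # resample rows with replacement
        draws[b] = smf.ols(formula, data=df.iloc[idx]).fit().params[coef]
    return np.percentile(draws, [2.5, 97.5])

# Hypothetical usage with the small firm-level DataFrame from the sketch above:
# lo, hi = bootstrap_ci(df, "share_return ~ women_share + profit + size", "women_share")
# print(f"95% bootstrap CI for the adjusted effect: [{lo:.2f}, {hi:.2f}]")
```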
This is another reason why attribution analysis remains so popular: it still looks good on a dashboard even when data is limited. But causal inference offers clarity in small samples too, because it explicitly models the counterfactual question.
Business implications: dashboards vs. decisions
So where does this leave us? Both approaches have their place.
Attribution analysis and driver decomposition are perfect for dashboards, reporting, and post-mortems. They help explain results in language managers understand.
Causal inference is for decisions. It tells you whether pulling a lever tomorrow is likely to move the outcome.
In short: attribution is backward-looking, causal inference is forward-looking. Confusing the two can mean mistaking storytelling for strategy.
The Bottom Line: Seeing Clearly
Markets, like minds, love stories. Attribution and driver analysis supply those stories in neat packages: “our return was driven by stock selection,” “our sales were driven by pricing.” They are comforting, but they can also be illusions.
Causal inference is less comforting but more honest. It asks: “if we did something differently, would the outcome change?” It is the discipline of seeing clearly, of resisting easy stories for actionable truths.
Explaining the past is not the same as predicting the future. For investors and managers navigating uncertainty, that difference may be the line between repeating mistakes and making better decisions.
Reads of the Week
In this provocative piece,
warns that the IPCC’s upcoming climate report (AR7) may abandon its longstanding scientific framework for assessing extreme weather, replacing it with a more politically charged approach called Extreme Event Attribution (EEA). This shift, he argues, prioritizes media appeal and climate litigation over rigorous, peer-reviewed science, potentially undermining public trust in climate assessments.

If AI in fintech feels like a buzzword, this primer from
is your antidote. With clarity, wit, and even Oreos, he demystifies the real types of AI—like Expert Systems, Machine Learning, and Generative AI—and how they’re actually being used by companies like Ramp, Klarna, and Sage. For Wangari readers curious about how AI is reshaping financial services (and whether the AI in your banking app is truly intelligent), this is an insightful, jargon-free starting point.

This article from
offers a clear-eyed look at how AI, especially language models like ChatGPT, is reshaping the world of financial engineering. Instead of replacing human expertise, these tools are becoming integral to decision-making, risk management, and even documentation on trading desks. For Wangari readers exploring the intersection of finance, tech, and innovation, this piece highlights how fluency in both data and language could define the next generation of financial careers.