Wangari Digest

When Models Become Mirrors—And What to Do About It

Has financial modeling become self-referential?

Hey everyone, it’s Ari.

On Tuesday this week, I posed a slightly unsettling question:
Are we modeling the world—or just modeling ourselves?

And today I want to sit with that question. Not to answer it once and for all, but to feel out its edges—to ask what it means for how we work, how we model, and how we make decisions in a world shaped by transition.

Let’s start here:
Most financial models are built to predict. They assume some version of the future can be known—if only we have the right inputs, the right historical data, the right estimation technique.

But what if the future isn’t just hard to predict…
What if it’s fundamentally different from the past?

What if the world is shifting underneath us—climate regimes, social norms, regulatory thresholds—so fast and so unpredictably that the whole predictive paradigm becomes less useful?

In that case, we might still be building beautiful models—but we’re modeling ghosts.
Reflections of our own assumptions. Projections of what we already believe.

A mirror. Not a window.


What’s Really Inside a Model?

We like to think of models as objective. But really, a model is a vessel—
One that carries our beliefs about what matters, what doesn’t, what causes what, and what can be safely ignored.

If your model leaves out social tipping points, you're saying:

“These don't matter—or at least, not enough to model.”

If you assume linear relationships between sustainability factors and financial outcomes, you're saying:

“I expect the world to behave smoothly, even when it’s under pressure.”

We don’t always mean to say these things explicitly. But our models say them for us.


The Danger of Precision Without Reflection

I’ve said this before, but it bears repeating:
The more precisely you estimate the wrong thing, the more confidently you walk in the wrong direction.

It’s one of the quiet tragedies of our industry—
That we often equate “clean” models with “correct” ones.

But if the structure of the world is changing—if climate shocks, public sentiment, or policy regimes introduce non-linear breaks—then clean models that ignore that messiness may be worse than no models at all.


The Analyst as Explorer, Not Just Estimator

So what’s the alternative?

I believe the next evolution in financial modeling won’t just be smarter algorithms or bigger datasets.
It will be a shift in role.

From estimator to explorer.
From predictor to possibility-mapper.

We need models that don’t just say “this is the number,” but that ask:

“What could happen if this fragile assumption broke?”
“What paths might this system follow under pressure?”
“What are we missing because we didn’t think to look?”

This isn’t about rejecting rigor.
It’s about expanding it to include uncertainty, feedback, and even imagination.


Five Practices for More Honest Modeling

I want to offer you five quiet practices that I’ve found helpful in making my own models more window than mirror.

You don’t have to adopt them all. But consider them a kind of compass.

  1. Name your assumptions.
    Literally. Write them out. “This assumes a stable carbon tax.” “This assumes no geopolitical disruption.” Once they’re visible, you can question them.

  2. Draw before you code.
    Sketch a causal diagram before you touch the keyboard. Ask: “What drives what?” If you don’t know, that’s the real work.

  3. Model behavior, not just metrics.
    People matter—investors, regulators, consumers. Their responses to sustainability shifts may define the path your model’s trying to follow.

  4. Ask counterfactual questions.
    “What if emissions pricing spikes tomorrow?” “What if this company loses its supply chain overnight?” Counterfactuals show you the edge of your map.

  5. Design for breakage.
    Good models aren’t just robust—they’re honest when they fail. Build in indicators for when assumptions no longer hold. Build models that can say: “I don’t know.” A small sketch of what practices 1, 4, and 5 might look like in code follows this list.
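
To make this less abstract, here is a minimal Python sketch of what practices 1, 4, and 5 could look like together: assumptions written out as named objects, a handful of counterfactual scenarios, and a simple check that flags when the assumptions behind an estimate no longer hold. Every name and number in it (the Assumption and AssumptionRegistry classes, the carbon-tax and supply-chain thresholds, the scenario values) is hypothetical and only illustrates the shape of the idea, not a specific tool or dataset.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of an explicit assumption registry with breakage checks.
# None of these names come from a real library; they only illustrate the idea.

@dataclass
class Assumption:
    name: str
    statement: str                       # the assumption, written out in plain language (practice 1)
    still_holds: Callable[[dict], bool]  # indicator: do current observations still support it? (practice 5)

@dataclass
class AssumptionRegistry:
    assumptions: list[Assumption] = field(default_factory=list)

    def register(self, assumption: Assumption) -> None:
        self.assumptions.append(assumption)

    def check(self, observations: dict) -> list[str]:
        """Return the names of assumptions that the latest observations contradict."""
        return [a.name for a in self.assumptions if not a.still_holds(observations)]


# Practice 1: name the assumptions explicitly (illustrative thresholds only).
registry = AssumptionRegistry()
registry.register(Assumption(
    name="stable_carbon_tax",
    statement="Assumes the carbon tax stays within 10% of today's level.",
    still_holds=lambda obs: abs(obs["carbon_tax"] - 50.0) / 50.0 <= 0.10,
))
registry.register(Assumption(
    name="intact_supply_chain",
    statement="Assumes no major supply-chain disruption this quarter.",
    still_holds=lambda obs: obs["supplier_on_time_rate"] >= 0.85,
))

# Practice 4: probe counterfactual scenarios instead of a single point estimate.
scenarios = {
    "baseline":          {"carbon_tax": 52.0,  "supplier_on_time_rate": 0.93},
    "carbon_tax_spike":  {"carbon_tax": 120.0, "supplier_on_time_rate": 0.93},
    "supply_chain_loss": {"carbon_tax": 52.0,  "supplier_on_time_rate": 0.40},
}

# Practice 5: let the model say "I don't know" when its assumptions break.
for label, obs in scenarios.items():
    broken = registry.check(obs)
    if broken:
        print(f"{label}: assumptions no longer hold -> {broken}; treat outputs as unreliable.")
    else:
        print(f"{label}: assumptions intact; estimates are at least internally consistent.")
```

The design choice worth noticing is that the registry never predicts anything. It only tells you when the ground your point estimates stand on has shifted, which is exactly the moment a model should be willing to say “I don’t know.”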


The Bottom Line: A Question to Carry

As we close this week’s deep dive, I want to offer a question—not for your next project, but maybe for the way you see your work overall:

What would it look like to build models that are not just technically strong…
…but morally and philosophically aligned with the kind of world we want to navigate?

Because we don’t just model the world.
We shape it.

Thanks for listening.

Talk soon.
— Ari
