RMN Member Newsletter · 3 min read

Are Your Models Just Telling You What You Want to Hear?

Investors push for answers on the AI insurance impact.


To gain full access to Risk Market News, become a premium subscriber.


THIS WEEK'S RMN

MARKETS · Hawaii Bill Would Enlist Captives to Plug Catastrophe Coverage Crisis
The state’s insurance regulator and captive insurance industry have flagged issues with the proposal. — Read →


MARKETS · Gallagher Puts $200M Price Tag on AI Disintermediation Risk
Gallagher says AI threatens about 1% to 2% of its revenues — and its clients who wanted to buy direct already could. — Read →


MARKETS · California GOP Wildfire Proposals Include a Private-Market Insurance Commissioner Mandate
California Republican legislative package pairs a private-market experience mandate for the state's Insurance Commissioner with tax credits for home hardening and backup power. — Read →


MODELS · AI's Next Frontier May Be a Single Model for All Epidemics
Researchers are making the case for an AI model that generalizes across pathogens, and the pandemic bond's parametric trigger failures illustrate exactly what's at stake if they succeed. — Read →


Risk models are guiding trillion-dollar decisions across finance and policy by telling executives what the world is like. But for Eric Winsberg, who has spent twenty-five years studying the intersection of computer simulation, climate modeling, and pandemic epistemology, that is a critical mistake — and the people who built the models often know it.

Winsberg, who is finishing a visiting global professorship at Cambridge before returning to the University of South Florida, is careful to point out that he is not critical of models. He is critical of how they get used. The distinction has become something of a signature: models, he argues, are like subway maps.

"A subway map is a great tool for navigating a subway system," he says in this week's Risky Science Podcast. "But if you use a subway map to figure out how far apart two places physically are, you'll get misleading information — because you're using it for a purpose it wasn't designed for."

The analogy lands differently when you apply it to the models that underwrite risk decisions across insurance and capital markets. Those models were built for specific purposes. The problem Winsberg documents is that their outputs routinely escape those purposes and get treated as ground truth for questions they were never designed to answer.

In an earlier paper, Winsberg lays out his broad thesis: clarifying what a model is for — and honestly assessing whether it is adequate for that purpose — are not merely technical tasks. They are moral ones.

"Modelling tasks are moral responsibilities," he writes in the paper, "duties which pertain to knowledge but are moral nonetheless." The moral weight comes from the downstream stakes: models used to set insurance rates, justify lockdowns, or price catastrophe bonds affect real people, often people who had no seat at the table when the parameter values were chosen.

That concern extends directly to catastrophe risk. Winsberg argues that climate models are weakest at precisely the three things North Americans care most about: wildfire, sea level rise, and tropical cyclones.

"It's not controversial among climate scientists," he said, "that climate models are not very trustworthy when it comes to telling us what the rest of the century will look like in terms of tropical cyclones. But a lot of people think they are. And there are institutional disincentives to getting the message out."

The mechanism he describes is CMIP — the Coupled Model Intercomparison Project — which generates massive databases of output down to the postal code. That output was produced as a byproduct of model intercomparison but gets consumed by downstream users as if it were designed for local-scale forecasting. It wasn't.

Winsberg is not a pessimist about models. He is a pessimist about the way models get deployed when the incentive is to project confidence rather than represent uncertainty — a distinction that matters for anyone pricing risk from model output.

"You're never going to really escape this," he said of value-laden choices in modeling. "Whenever you're making assumptions and simplifying things for tractability, you open yourself up to making choices. And when scientists make choices, choices are inevitably going to be value-laden."

👉 Listen to the full episode

Follow the ‘Risky Science Podcast’
Apple Podcasts | Spotify | Overcast
