Insurance, reinsurance, and brokerage firms continue to push the narrative that they are no longer merely sources or conduits for risk capital repriced on an annual cycle. Instead, they now position themselves as full-service consultants on climate change risks across the enterprise.
Of course, that argument also conveniently supports the growth of recurring revenue streams built on risk modeling and consulting services.
Dean Klisura, President and CEO of Guy Carpenter, underscored this point during Marsh McLennan’s earnings call this week, stating that the firm’s “analytics platform is what differentiates us.” He added that clients should focus less on negotiating the next renewal or quarterly claims process and more on managing volatility and capital efficiency in the face of long-term climate risks.
I think the greatest application for AI in our sector will be managing the impact of climate change. Our clients want us to help them manage catastrophe risk moving forward: looking at their portfolios, providing advice, helping them model future climate change impact, building proprietary models using AI. That will be the true differentiator for Guy Carpenter as we try to support our clients.
But the industry’s ambition to become a full-service climate risk provider hinges on a key assumption: that its data and models have been retrofitted—climate-conditioned—to account for broader and longer-term climate risks.
As we explore in this week’s podcast with a researcher trained in econometrics, insurance risk models may still lack the robust economic context and high-quality data required to truly meet that challenge.
Support the Risky Science Podcast by becoming a Risk Market News paid subscriber.

Tricky Insurance Model Math
In the latest episode of the Risky Science Podcast, Michiel Ingels, a PhD candidate at Vrije Universiteit Amsterdam, breaks down the evolving landscape of climate risk insurance modeling and what it will take to make those models truly fit for purpose.
Ingels explains how major coverage and methodology gaps persist in most insurance models:
Windstorm was the most studied peril, and flood is in second place. But if you compare that to the actual losses in the last 30 years, you see the discrepancy.
Perhaps most significantly, the review found that fewer than half of models were truly forward-looking:
About 47% of the studies were prospective, meaning they used climate scenarios or socioeconomic pathways. So the majority were still reactive, using historical data only.
One of the key technical hurdles, according to Ingels, is translating complex climate scenarios into actionable insurance risk metrics. Each stage, from emissions projections through climate impacts, hazard modeling, and ultimately financial risk assessment, adds layers of uncertainty and demands collaboration across multiple disciplines.
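To make that chain of uncertainty concrete, here is a minimal sketch of how a scenario-to-loss pipeline can be stacked and how each stage widens the spread of outcomes. All figures, scaling factors, and variable names are hypothetical illustrations, not values from Ingels' research or any commercial model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 10_000

# Stage 1: climate scenario (assumed distribution of warming by mid-century)
warming_c = rng.normal(2.0, 0.4, size=n_sims)

# Stage 2: hazard response (assumed ~7% increase in flood intensity per degree,
# with its own model uncertainty)
hazard_scaling = 1.07 ** warming_c * rng.lognormal(0.0, 0.10, size=n_sims)

# Stage 3: vulnerability (assumed baseline damage ratio for the insured portfolio)
baseline_damage_ratio = 0.02
damage_ratio = np.clip(baseline_damage_ratio * hazard_scaling, 0.0, 1.0)

# Stage 4: financial risk metrics on a hypothetical $1bn exposure
exposure = 1_000_000_000
losses = damage_ratio * exposure
print(f"Expected annual loss: ${losses.mean():,.0f}")
print(f"99th percentile loss: ${np.percentile(losses, 99):,.0f}")
```

Each multiplicative stage carries its own error bars, so the tail of the final loss distribution is far more uncertain than any single input, which is why Ingels stresses cross-disciplinary collaboration at every link in the chain.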
Ingels also highlights the critical role insurance can play in promoting climate resilience, noting that pricing risk accurately can help steer markets toward more adaptive behavior. He calls for new collaborative frameworks that bring together scientists, insurers, and model developers to design transparent, forward-looking models that serve both public and private sector needs.
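As a back-of-the-envelope illustration of that price signal (all figures invented for this example, not taken from the episode), a pure premium built on expected loss falls when a policyholder invests in adaptation that lowers vulnerability:

```python
def pure_premium(event_prob, exposure, damage_ratio, expense_loading=0.25):
    """Expected annual loss plus a simple proportional expense loading."""
    expected_loss = event_prob * exposure * damage_ratio
    return expected_loss * (1 + expense_loading)

# Hypothetical coastal property: $500k exposure, 2% annual flood probability
before = pure_premium(event_prob=0.02, exposure=500_000, damage_ratio=0.30)
# Assumed effect of flood-proofing: damage per event drops from 30% to 12% of value
after = pure_premium(event_prob=0.02, exposure=500_000, damage_ratio=0.12)

print(f"Premium before adaptation: ${before:,.0f}")  # $3,750
print(f"Premium after adaptation:  ${after:,.0f}")   # $1,500
```

If premiums track modeled risk this closely, the annual saving is the incentive that rewards adaptation; flat-rated or mispriced cover erases it.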
He sees reasons for optimism, citing increased regulatory interest and the growing acknowledgment of model limitations:
“We’re at a tipping point. There’s an awareness that models need to evolve if they’re going to remain relevant in the face of accelerating climate impacts.”
📺 Listen to the full episode here
Show Notes
🔬 Guest
Michiel W. Ingels – Vrije Universiteit Amsterdam
The State of the Art and Future of Climate Risk Insurance Modeling – Annals of the NY Academy of Sciences (2024)
📊 Podcast Topics
- Gaps in catastrophe modeling: windstorm vs. flood
- Climate-aligned pricing and adaptation incentives
- Public-private modeling partnerships
- Forward-looking scenario integration
- Modeling compound and cascading risks
- Underinsurance and the protection gap