- Oregon Moves to Regulate Wildfire Models
New legislation would require insurers to file, justify, and disclose the wildfire risk models they use.
- Physical Risk, Not Broad ESG, Is Driving Model and Data Growth: MSCI
As ESG budgets soften in parts of the U.S., MSCI says clients are redirecting spending toward asset-level physical risk and geospatial analytics.
- Insurers Say Their Tech Exposure Is "Immaterial". Wall Street Isn't So Sure.
Wall Street analysts pressed insurers across earnings calls about software and private credit exposure, suggesting the buy side sees concentration risk the industry hasn't priced.
- Xcel Defends Wildfire Plans as Policymakers Push Back
After settling a $650 million claim tied to the Marshall Fire, the utility says that its power shutoff strategy is painful but necessary.
- New Mexico Moves to Inject Wildfire Model Governance into Homeowners Market
New Mexico lawmakers are moving to place wildfire catastrophe models under formal rate-filing oversight, requiring insurers to disclose their methodologies.
- Lawmakers Demand FEMA Release Flood Risk Model Data
Senate Republicans are demanding that the Federal Emergency Management Agency disclose the data and assumptions behind its Risk Rating 2.0, arguing the opaque methodology is driving premium spikes.
Who Prices AI Risk When There's No Loss History?

That's the question at the center of this week's Risky Science Podcast, where I sat down with Daniel Reti, co-founder and CEO of Exona Lab, a startup building AI risk quantification tools for the insurance industry.
Daniel, who came out of Bank of America's counterparty credit risk team, sees parallels between estimating what happens when a counterparty goes bust and estimating what happens when an AI system goes sideways.
The conversation covered a lot of ground, from underwriting questionnaires to catastrophe bonds to why agentic AI is fundamentally reshaping the tail risk profile, but the through-line was a simple and urgent thesis: AI risk is real, it's growing, and almost nobody is pricing it properly.
A new kind of risk, with no loss history
Reti's starting point is that AI risk is unique.
"AI is such a fast-changing technology and it has rammed itself into all areas of life and business, but there is still very little quantitative loss history around the actual risk," he says. "However, we don't think that's representative, and we believe the risks are very real."
Exona Lab's approach is to build a risk engine that helps insurers underwrite AI exposure at the company level. They've developed questionnaires focused on engineering and governance signals (a toy scoring sketch follows the list):
- Does the company fine-tune models on custom data?
- Do employees have uncontrolled access to MCP servers?
- Is there a dedicated person accountable for AI security?
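To make these signals concrete, here's a minimal scoring sketch in Python. The fields, weights, and scoring logic are illustrative assumptions on my part, not Exona Lab's actual engine:

```python
from dataclasses import dataclass

@dataclass
class AIRiskQuestionnaire:
    """Illustrative engineering/governance signals; not Exona Lab's real schema."""
    fine_tunes_on_custom_data: bool
    uncontrolled_mcp_access: bool
    has_ai_security_owner: bool

def risk_score(q: AIRiskQuestionnaire) -> float:
    """Toy additive score in [0, 1]; higher means riskier. Weights are assumptions."""
    score = 0.0
    if q.fine_tunes_on_custom_data:
        score += 0.3  # custom training data widens the failure surface
    if q.uncontrolled_mcp_access:
        score += 0.4  # ungoverned tool access is a strong agentic-risk signal
    if not q.has_ai_security_owner:
        score += 0.3  # nobody is accountable for the risk
    return score

# Worst case on all three signals scores 1.0.
print(risk_score(AIRiskQuestionnaire(True, True, False)))
```

A real engine would weight dozens of such signals and calibrate against whatever loss proxies exist, but the shape of the problem is the same: turn governance answers into a price.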
The analogy Reti draws is to early cyber insurance.
"I think around ten to fifteen years ago, when silent cyber was emerging and the cyber insurance industry was taking shape, people realized that a really informative question to ask was whether a company has a CISO on their team, because it means there's one person accountable for the risk. We think a similar function should exist for AI risk."
Why cat bonds, not traditional insurance?
One of the most provocative parts of the conversation was Reti's case for AI catastrophe bonds.
The logic is straightforward: traditional insurers don't want to put their balance sheets on the line for an event that has never happened and whose probability is genuinely unknown. But capital markets investors (ILS funds, hedge funds) have a completely different risk appetite.
"We could think about a structure that goes around the insurance companies and the insurance industry in general, and straight-up offers this high-return security to capital markets investors who have a completely different risk appetite than insurance companies," Reti explained. "And another fact is that these frontier labs are extremely well-capitalized. We've never seen so much capital in one place. They can really afford to pay a high premium to get coverage."
Daniel and co-author Gabriel Weil laid out the full mechanics of this proposal in their recent article Making Extreme AI Risk Tradeable, where they estimate that five major labs participating could support an initial collateral pool of $350–500 million, comparable to mid-sized natural catastrophe bonds.
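For readers new to the instrument, here's a hedged sketch of the cash flows from the investor's side: post collateral up front, collect a premium-funded coupon, and lose principal only if a defined trigger event occurs. The figures and trigger are hypothetical, not taken from the paper:

```python
def investor_pnl(collateral: float, coupon_rate: float, years: float,
                 trigger_fired: bool, payout_fraction: float = 1.0) -> float:
    """Toy catastrophe-bond cash flow from the investor's perspective.

    Sponsors (here, the frontier labs) fund the coupon; investors post
    collateral and forfeit `payout_fraction` of it if the trigger fires.
    """
    coupons = collateral * coupon_rate * years
    principal_back = collateral if not trigger_fired else collateral * (1.0 - payout_fraction)
    return coupons + principal_back - collateral  # net profit or loss

# Hypothetical: a $400M pool paying a 12% coupon over a 3-year term.
print(investor_pnl(400e6, 0.12, 3, trigger_fired=False))  # +$144M in coupons
print(investor_pnl(400e6, 0.12, 3, trigger_fired=True))   # -$256M: principal lost
```

The high coupon is what Reti means by a "high-return security": investors are paid for bearing genuine uncertainty about the trigger probability.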
Central to the structure is a Catastrophic Risk Index that would tie pricing to standardized safety assessments: safer labs pay less, riskier labs pay more. The building blocks for this kind of index are already emerging. Daniel is also a co-author of a recent paper that proposes a framework of AI Assurance Levels for rigorous third-party verification of frontier lab safety and security practices.
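Here's a minimal sketch of what index-linked pricing might look like, assuming a hypothetical mapping from an assurance level (higher = more rigorously verified safety practices) to a premium multiplier. The levels and numbers are illustrative, not from either paper:

```python
# Hypothetical multipliers keyed to an AI Assurance Level (AAL);
# higher AAL = stronger verified safety practices = cheaper coverage.
AAL_MULTIPLIER = {1: 2.0, 2: 1.5, 3: 1.0, 4: 0.7}

def annual_premium(base_rate: float, coverage: float, assurance_level: int) -> float:
    """Premium scales with coverage and the lab's verified assurance level."""
    return base_rate * coverage * AAL_MULTIPLIER[assurance_level]

# Same $100M of coverage, very different prices for safe vs. risky labs:
print(annual_premium(0.10, 100e6, 4))  # $7M  for a well-verified lab
print(annual_premium(0.10, 100e6, 1))  # $20M for a poorly verified one
```

That spread is the point of the index: it turns verified safety practices directly into a price signal.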
Capital markets as governance
When I asked whether the cost of capital could force safer model deployment faster than policy mandates, Reti was unequivocal.
"That's really exactly the point. AI is a fundamental, transformative new technology. But at the moment, we think the risks of AI are not sitting in the same place as the beneficiaries.
👉 Listen to the full episode