RMN Member Newsletter · 3 min read

Risk Wrapped In a Cat Bond Inside an Enigma

Also, wildfire models under the state regulators' microscope and physical risk reality vs. ESG expectations



Who Prices AI Risk When There's No Loss History?

That's the question at the center of this week's Risky Science Podcast, where I sat down with Daniel Reti, co-founder and CEO of Exona Lab, a startup building AI risk quantification tools for the insurance industry.

Daniel, who came out of Bank of America's counterparty credit risk team, sees parallels between estimating what happens when a counterparty goes bust and estimating what happens when an AI system goes sideways.

The conversation covered a lot of ground, from underwriting questionnaires to catastrophe bonds to why agentic AI is fundamentally reshaping the tail risk profile, but the through-line was a simple and urgent thesis: AI risk is real, it's growing, and almost nobody is pricing it properly.

A new kind of risk, with no loss history

Reti's starting point is that AI risk is unique.

"AI is such a fast-changing technology and it has rammed itself into all areas of life and business, but there is still very little quantitative loss history around the actual risk," he says. "However, we don't think that's representative, and we believe the risks are very real."

Exona Lab's approach is to build a risk engine that helps insurers underwrite AI exposure at the company level. They've developed questionnaires focused on engineering and governance signals.

The analogy Reti draws is to early cyber insurance.

"I think around ten to fifteen years ago, when silent cyber was emerging and the cyber insurance industry was taking shape, people realized that a really informative question to ask was whether a company has a CISO on their team, because it means there's one person accountable for the risk. We think a similar function should exist for AI risk."
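To make the questionnaire idea concrete, here is a minimal sketch of how governance signals might be turned into a company-level score. The signal names and weights are entirely hypothetical; Exona Lab's actual questionnaire and scoring method are not public.

```python
# Illustrative only: these signals and weights are hypothetical, not
# Exona Lab's actual questionnaire.
SIGNALS = {
    "has_ai_risk_officer": 3,      # the AI analogue of the CISO question
    "model_inventory": 2,          # does the company track deployed AI systems?
    "incident_response_plan": 2,
    "third_party_model_audits": 2,
    "human_in_the_loop_controls": 1,
}

def governance_score(answers: dict) -> float:
    """Return a 0-1 governance score from yes/no questionnaire answers."""
    max_score = sum(SIGNALS.values())
    earned = sum(w for signal, w in SIGNALS.items() if answers.get(signal))
    return earned / max_score

company = {
    "has_ai_risk_officer": True,
    "model_inventory": True,
    "incident_response_plan": False,
    "third_party_model_audits": False,
    "human_in_the_loop_controls": True,
}
print(governance_score(company))  # (3 + 2 + 1) / 10 = 0.6
```

The point of the CISO analogy is visible in the weights: a single accountable owner of the risk is the strongest signal an underwriter can ask about.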

Why cat bonds, not traditional insurance?

One of the most provocative parts of the conversation was Reti's case for AI catastrophe bonds.

The logic is straightforward: traditional insurers don't want to put their balance sheets on the line for an event that has never happened and whose probability is genuinely unknown. But capital markets investors (ILS funds, hedge funds) have a completely different risk appetite.

"We could think about a structure that goes around the insurance companies and the insurance industry in general, and straight-up offers this high-return security to capital markets investors who have a completely different risk appetite than insurance companies," Reti explained. "And another fact is that these frontier labs are extremely well-capitalized. We've never seen so much capital in one place. They can really afford to pay a high premium to get coverage."

Daniel and co-author Gabriel Weil laid out the full mechanics of this proposal in their recent article Making Extreme AI Risk Tradeable, where they estimate that five major labs participating could support an initial collateral pool of $350–500 million — comparable to mid-sized natural catastrophe bonds.

Central to the structure is a Catastrophic Risk Index that would tie pricing to standardized safety assessments: safer labs pay less, riskier labs pay more. The building blocks for this kind of index are already emerging. Daniel is a co-author of a recent paper that proposes a framework of AI Assurance Levels for rigorous third-party verification of frontier lab safety and security practices.
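The pricing incentive can be sketched in a few lines. The rate schedule below is hypothetical; the Reti/Weil proposal's actual index construction and rates are not reproduced here. Only the collateral pool figure comes from the article.

```python
# Hypothetical premium schedule tied to a Catastrophic Risk Index in [0, 1],
# derived from standardized third-party safety assessments.
def annual_premium(collateral: float, base_rate: float,
                   risk_index: float) -> float:
    """Premium scales with the lab's risk index: safer labs pay less."""
    assert 0.0 <= risk_index <= 1.0
    return collateral * base_rate * (1.0 + risk_index)

pool = 400_000_000  # midpoint of the $350-500M pool estimated in the article
print(annual_premium(pool, 0.05, 0.2))  # premium for a safer lab
print(annual_premium(pool, 0.05, 0.8))  # premium for a riskier lab
```

With these illustrative numbers, moving from a 0.2 to a 0.8 index raises the annual premium by half, which is exactly the cost-of-capital pressure discussed in the next section.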

Capital markets as governance

When I asked whether the cost of capital could force safer model deployment faster than policy mandates, Reti was unequivocal.

"That's really exactly the point. AI is a fundamental, transformative new technology. But at the moment, we think the risks of AI are not sitting in the same place as the beneficiaries."

👉 Listen to the full episode

Follow the ‘Risky Science Podcast’
Apple Podcasts | Spotify | Overcast
