RMN Member Newsletter · 5 min read

Obamacare, But for Homeowners Insurance

Also, cat bonds are putting an increasing premium on wildfire risk.



AI Is "Locked and Loaded" for Underwriting. So Why Hasn't the Industry Pulled the Trigger?

That's the provocation at the center of this week's Risky Science Podcast, where I sat down with Daniel Schwarcz, a law professor at the University of Minnesota Law School who has spent two decades studying insurance from the inside.

The conversation ranged widely (from AI in claims handling to catastrophe modeling to the shadow insurance problem) but the thread running through all of it was a simple and uncomfortable observation: the technology is ready, the regulators aren't, and the industry is stuck in between.

A capability gap, not a technology gap

Schwarcz says that the debate about whether AI can do meaningful work in insurance is largely over.

"It's been clear for some time that machine learning techniques are much better at predicting than ordinary human techniques," he says. Fraud detection, claims adjudication, correspondence drafting, underwriting — the tools exist and, in many cases, have existed for years.

The gap isn't capability. It's adoption.

And the reason for the gap, Schwarcz argues, is that insurers face a regulatory environment that is fragmented, vague, and in some cases actively hostile to AI deployment. Fifty different state regimes, layered bad faith exposure, and a near-total absence of clear standards for what constitutes unfair discrimination when an algorithm is making the call.

"States have largely fallen back on platitudes," he says. "They don't provide the clear, unambiguous standards that insurers can actually use."

The result is that firms willing to move aggressively on AI face not just regulatory risk but significant legal bills navigating it. Many of the larger players, Schwarcz observes, are taking a more cautious approach.

Where the black box gets dangerous

Not all AI risk is the same, and Schwarcz is careful about where he draws the lines.

On catastrophe modeling and property risk assessment, he's broadly supportive, even enthusiastic. Better, more granular parcel-level risk data is genuinely useful. It promotes competition among insurers who are no longer all dependent on the same one or two dominant cat models. And accurate pricing, he argues, is socially important: it sends signals about climate risk that drive mitigation behavior.

Life insurance is a different story.

Race is legally prohibited as an underwriting factor, but it remains statistically predictive of longevity even after controlling for other variables. A general-purpose AI model trained on broad data will likely proxy for race in ways the insurer deploying it may not even recognize. "There are a million different ways to proxy for race," Schwarcz says, "and you're not even going to be aware of it if you're just throwing as much data as you can at the model."

The most publicly visible legal battles over AI in insurance have played out in health, where major insurers face suits over algorithmic claims handling. The core allegations: biased training data, failure to account for individual circumstances, and human reviewers denying claims in batches of hundreds without meaningful review. Schwarcz frames these not just as legal risks but as genuine policy failures — the kind that disclosure requirements, however well-intentioned, are ill-equipped to fix.

"Disclosure is often used because it's relatively cheap," he says dryly. "And it usually doesn't do much, which is sometimes exactly what players in the industry want."

The machine fiduciary problem

One of the more provocative threads in Schwarcz's recent research is the argument that generative AI used in insurance sales should be held to something like a fiduciary standard.

The concern is straightforward: if there's one thing generative AI is demonstrably good at, it's persuasion. Personalized, scalable, individual-level persuasion. Without appropriate guardrails, AI-powered sales tools can replicate — and dramatically scale — the same mis-selling and commission-maximizing behaviors we've always worried about with human agents, particularly in life insurance where unsuitable products and commission-driven sales are already a documented problem.

Schwarcz's counterintuitive point is that the benefit side of this equation is actually quite weak. For most ordinary consumers, a simple rules-based algorithm is sufficient to identify what kind of insurance they need. The case for deploying generative AI in sales — rather than more transparent rules-based tools — rests on a benefit-risk ratio that doesn't hold up.

The homeowners insurance problem and the Obamacare answer

The conversation's most provocative turn comes when Schwarcz defends an argument he's made in print: that the Affordable Care Act offers a useful template for reforming homeowners insurance markets.

He's clear-eyed about the branding challenge. But his case is substantive. The homeowners insurance crisis, he argues, is fundamentally a market structure and incentive alignment problem, not primarily an AI problem or even a climate modeling problem. Accurate premium pricing is essential because it signals risk and drives mitigation, but the current system features cross-subsidies that are poorly targeted, often benefiting wealthier homeowners, and that erode the signaling effect of premiums.

His preferred solutions echo ACA mechanics: insurance exchanges to facilitate comparison shopping and real competition, a move away from prior approval rate regulation toward managed competition, and direct income-based subsidies to address affordability rather than hidden cross-subsidies embedded in the rate structure. He notes, with some relish, that Obamacare actually originated as a market-facilitation idea, and that in some ways the ACA is less regulatory than the prior approval systems many states operate for P&C insurance.

"I'd prefer a model where the government helps facilitate and structure the market, but then allows it to operate," he says.

Is a government backstop for the homeowners market inevitable? Schwarcz pushes back on the fatalism. P&C insurance has lower barriers to entry and more competitors than health insurance. The problem is real but manageable — as long as the industry can develop a longer time horizon than its current annual policy cycle allows.

Where the industry goes from here

Schwarcz closes with a measured forecast: relative to most sectors, insurance will be less disrupted by AI over the next five years. The regulatory complexity, legal exposure, and conservative risk culture of established players provide meaningful drag. But he cautions against reading that as stability.

"The value proposition for AI has always been unusually clear in insurance," he says. "It's a prediction-driven industry. And yet we still have significant reluctance to fully embrace it. That tells you something about how powerful those other forces are."

He expects the adoption gap to narrow, not because regulators solve the hard problems, but because the economic pressure to do so keeps building.

👉 Listen to the full episode

Follow the ‘Risky Science Podcast’
Apple Podcasts | Spotify | Overcast
