Embracing The Uncertainty of Catastrophe Models

Capital providers will need to recruit more in-house science talent as “off the shelf” models continue to fall short.


Thanks for being a free subscriber to RMN.

As I spin up a daily newsletter for paid subscribers and an updated site, this week's Sunday post focuses on the changing role of catastrophe models.

Consider becoming a paid subscriber to RMN for access to all content.


Catastrophe Q&A

Joe Roussos

Institute for Futures Studies

The 2022 reinsurance renewal season revealed that capital providers are questioning the efficacy of cat models more than ever, as losses continue to exceed catastrophe budgets and fall outside modeled expectations.

Joe Roussos, Ph.D., has studied insurer and reinsurer use of catastrophe models through his work at the Institute for Futures Studies and the London School of Economics. A recently published research paper he co-authored suggests a “confidence approach” to dealing with the models and their attendant uncertainty.

Speaking in an interview with Risk Market News from his office in Stockholm, Roussos discussed how capital providers can change their approach to models and why the practice of getting “off the shelf” answers through modeling may need to change.

This interview has been edited for length and clarity.

Risk Market News: Could you describe how you came across the issue of catastrophe model use in the insurance and reinsurance industry?

Joe Roussos, Ph.D.: Our team was interested in what happens when scientists are uncertain about the underlying science and the right way to model risk.

We start with a situation where there are multiple models. How they differ can vary, but there are times when the models are significantly different: they disagree on basic scientific questions, not just on your favorite statistical technique.

The challenge is, how do you use those models to make decisions?

This is a situation that arises in catastrophe insurance. And it's often accompanied by a quite reasonable feeling that this reflects a state of uncertainty amongst the scientists. That uncertainty is not going to be well captured by any individual model.

Underwriters working with hurricanes, for instance, would report to me that they felt like there was a lot of uncertainty that wasn't in the models. That required them to do additional work and attempt to add in uncertainty premiums using heuristic methods, often based on their own experience.

That was our starting point: how can we do better at making use of the information that's in this collection of models? And our thesis is basically that the spread of the model outputs is itself valuable information, which the decision maker can and should make use of. But what they're lacking is tools to make use of it in a structured way. The “confidence approach” is a structured approach to managing the kind of uncertainty that you face when you have a whole collection of models.

What the confidence approach does is take the set of model outputs and construct a whole collection of ranges. These ranges are nested. There'll be one that's very specific. Maybe it'll be a single point. Then there'll be a wider range, then a wider one still, and so on.

But that sounds like too much. How are you meant to work with, not just a range, but now a whole nested set of ranges? And the answer is which range you use depends on how much is at stake in the decision you're making and how cautious you are.

The goal is to allow decision makers, insurers in this case, to calibrate their response to uncertainty based on the decision they're facing. The thought is that if you were to run with this approach, you'd use the models slightly differently, depending on the kind of decision maker you were.

If there was a lot at stake, or if it was a very large portfolio, you would react differently to the uncertainty. And similarly, it would depend on your degree of caution. It's meant to allow insurers to act in a way that balances, across a whole portfolio, how they feel from contract to contract. I mean, perhaps not literally that specifically, but it gives them some tools to react sensitively.
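To make the mechanics concrete, here is a minimal sketch in Python of the nested-range idea described above. The model outputs, the way the ranges are built, and the mapping from a caution level to a range are all illustrative assumptions for this newsletter, not the construction from Roussos's paper.

```python
import statistics

# Hypothetical annual loss rates (per unit of exposure) from five competing models.
model_outputs = [0.012, 0.015, 0.019, 0.024, 0.031]

def nested_ranges(outputs):
    """Build a nested family of ranges from a collection of model outputs."""
    xs = sorted(outputs)
    point = (statistics.median(xs), statistics.median(xs))  # most specific: a single point
    inner = (xs[1], xs[-2])                                  # drop the most extreme model on each side
    full = (xs[0], xs[-1])                                   # widest: the full spread of the models
    return [point, inner, full]

def pick_range(ranges, caution_level):
    """Higher stakes or more caution selects a wider range (a larger index)."""
    return ranges[min(caution_level, len(ranges) - 1)]

low, high = pick_range(nested_ranges(model_outputs), caution_level=2)
print(f"Range used for this decision: {low:.3f} to {high:.3f}")
```

In this toy version, a routine contract with little at stake might be priced off the single point, while a decision that puts a large share of capital at risk would use one of the wider ranges.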

All the answers that it generates are from the science. It's just a question of which models you're using for which decision. There's no subjectivity in that sense. It's not like when underwriters use their personal experience to fudge a rate up or down because they feel like there's some unpriced uncertainty.

But what it does do is result in, say, a single insurer using different rates over time, because they're basically relying on different scientific answers depending on what's at stake in the decision. I also mentioned caution, but as a decision theorist I usually assume that an individual simply has a fixed degree of caution. That might not be true in an institutional setting; caution could also vary in that way.

RMN: Since a common approach in the industry today is to average, or blend, the models, can you describe how this confidence approach differs from current industry practice?

Roussos: If you are a big reinsurer, like Swiss Re, you have an in-house modeling team and your own set of scientists working for you. They'll build their own models and just try to do their best. They have to navigate the uncertainty that the scientists have. They basically have to make calls on active scientific disagreements and put together what they think is the best single model.

That has some obvious downsides. These are active scientific disagreements. Forcing your in-house team to produce one model is always going to poorly represent the actual uncertainty out there.

The other approach is to average third-party models. If you are buying models, from AIR or RMS for example, you can buy a bunch of them and then you can average them. A few people I've spoken to at different companies say that they effectively do that. And maybe they're averaging those together with their own internal model. RMS itself has a group of 13 models that they average to produce their medium-term rate for hurricane risk.

What you're doing when you're averaging is you're trying to decide how good each individual model is. You'll run it through some predictive tests. You'll also do a bunch of other exercises to validate the model and then you'll assign it a score. And that score will become a weight. The best scoring model will count for more in the average. The worst scoring model will count for less. But in the end, you just blend them all together and you get out a single rate.
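As a rough illustration of the blending Roussos describes, the Python sketch below turns validation scores into weights and collapses several model rates into a single number. The model names, rates, and scores are invented for the example; real validation pipelines are far more involved.

```python
# Hypothetical hurricane loss rates from three models, with made-up validation scores
# (higher score = better performance on whatever predictive tests were run).
model_rates  = {"model_a": 0.012, "model_b": 0.019, "model_c": 0.031}
model_scores = {"model_a": 0.9,   "model_b": 0.6,   "model_c": 0.3}

# Normalize the scores into weights, then blend the rates into one number.
total_score = sum(model_scores.values())
weights = {name: score / total_score for name, score in model_scores.items()}
blended_rate = sum(weights[name] * rate for name, rate in model_rates.items())

print(f"Blended rate: {blended_rate:.4f}")  # the single number the client ultimately sees
```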

One of the issues, as we see it, is that there's a lot of disagreement on how to do the averaging. There are different ways of doing these scoring tests. A lot of the choices that you're forced to make feel arbitrary. And if you did it a different way, or you used a different scoring rule, you would get a different answer.

And so that's introducing a new kind of uncertainty. You're trying to deal with this scientific uncertainty about how hurricanes form. And now you're introducing a new kind of uncertainty like which scoring method should we use? Or what's exactly the right statistical test? That seems like you're layering in a different kind of uncertainty. That's an unfortunate aspect of doing it that way.
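Continuing the toy example above, swapping in a different, equally hypothetical set of scores shows how the choice of scoring rule alone moves the blended number, which is the extra layer of uncertainty Roussos is pointing at.

```python
# The same made-up model rates as before, scored under two hypothetical validation schemes.
model_rates   = {"model_a": 0.012, "model_b": 0.019, "model_c": 0.031}
scores_rule_1 = {"model_a": 0.9,   "model_b": 0.6,   "model_c": 0.3}
scores_rule_2 = {"model_a": 0.4,   "model_b": 0.5,   "model_c": 0.7}

def blend(rates, scores):
    """Blend model rates using normalized scores as weights."""
    total = sum(scores.values())
    return sum(scores[name] / total * rate for name, rate in rates.items())

# Same models, same rates, two different "single numbers" depending on the scoring rule.
print(f"Scoring rule 1: {blend(model_rates, scores_rule_1):.4f}")
print(f"Scoring rule 2: {blend(model_rates, scores_rule_2):.4f}")
```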

But it also collapses everything down to this one number that you're then going to use. There's a lot of pressure to produce one number. If you're a cat modeling company, your client just wants one answer. They don't want to hear about 13 models. They want a price on insurance products. And similarly, if you're the in-house scientist, people are already constantly frustrated with you for trying to offer them lots of nuance and lots of uncertainty. You can't be showing up with 10 different models. They just want one number.

There's a lot of institutional pressure to go in that direction. But it does mean that you're collapsing down this big disagreement that's out there in the real world to one number. And that hides a lot of the uncertainty.

That might be bad in an institutional sense. It might mean that your underwriters think that the situation is clearer than it really is. They might take away from a single number that the rate really is whatever the number is, as opposed to that being a somewhat arbitrary and negotiated hedge between a wide variety of scientific positions.

RMN: What about different catastrophe risks, like windstorm and earthquakes? Would this approach work across multiple perils?

Roussos: If an insurer is working with multiple windstorm models, then it could work there. Windstorm is a case where it could perhaps be applied.

The scientific dynamics are different for different perils. What earthquake people are uncertain about manifests in a different way than it does in the hurricane community.

The reason we started with hurricanes, I guess, is that one of the things you see there is a proliferation of models, because there is quite a sophisticated tradition of partly physics-based simulation and partly statistical modeling that's linked up very much with climate science. Hurricane is definitely going to be the easiest case, but there's nothing in principle that stops the approach from being applied elsewhere.

I've talked more recently about how it could be used in managing COVID risk. If you're planning hospital supplies and you have a wide range of estimates of the case load in the next three months, it could be used in that case. There's nothing hurricane specific about it.

RMN: What's your biggest concern when it comes to the insurance industry's implementation of catastrophe models?

Roussos: I think the big challenge on the horizon is how to deal with climate change and its influence on natural catastrophes. There's obviously a lot of talk in the media about increasing losses from wildfire in particular, and about last year's flooding being due to climate change. But that remains really difficult to quantify. And the scientific community is very interested in, but also really torn about, how to do post hoc event attribution and work out how much climate change is driving things.

I think we're at a really difficult time for insurers in terms of how to get ahead of that and start thinking smartly about climate change. I think that we may be entering a period in which the scientific uncertainty about how climate change is affecting those perils is so significant, and the disagreement so large, that there aren't really good answers on how to model the associated risks or the additional fraction of risk due to a changing climate.

In that kind of circumstance, I think it's important to guard against the allure of models and to beware the attractiveness of false precision. People like models because they're useful tools. They give you an answer. And soon, I'm sure, there will be new modules, new updated models for every peril you can think of, that offer a climate risk factor.

I don't think we know the answer scientifically and it's a period in which we have to be careful about how much we rely on models, how much we demand of models.

RMN: Does that mean you then shift back to traditional underwriting experience and base your capital planning and pricing on that?

Roussos: I don't think that would be a better option.

What I'm envisaging is engaging directly with scientists on these issues, but not expecting it to be as simple as buying an off-the-shelf model that calculates the answer.

Engaging more with expert elicitation exercises, where you talk to a range of scientists about what their view of the climate risk is, and using that as a modifier for a view you develop on the basis of recent performance and how the market is doing. I think it's just about making it more complex. I wouldn't say it's about withdrawing from engaging with the science and focusing on underwriting experience.

It's more about moving away from thinking models have all the answers, to engaging more directly with the experts. Just facing full on the fact that they are uncertain and they're going to give you different answers. I guess it's in line with embracing the uncertainty thinking that is in this confidence approach.

