Many Choices, One Risk: Systemic Risk And Catastrophe Models

At the recent Reinsurance Association of America Cat Risk Management conference in Orlando, JB Crozet, head of underwriting at Amlin, and Dr. Anders Sandberg, the James Martin Research Fellow at the University of Oxford, presented a discussion on the rise of catastrophe models and how they are potentially exposing the industry as a whole to a larger systemic failure.

RiskMarketNews spoke with Mr. Crozet and Dr. Sandberg to explore their ideas.

RiskMarketNews: Could you define what systemic risk is within a catastrophe model market?

JB Crozet: Broadly defined, systemic risk is a structural link within the industry that makes it collectively more vulnerable than its organizations would be individually.

When it comes to catastrophe modelling, the primary source of systemic risk arises from the uniformity of modelling practices.

First, you have some very historically dominant firms — such as RMS, AIR and EQECAT — that work in very similar ways and use similar methodologies.

Second, we have the outsourcing of regulatory modelling to (re)insurers:

The “Use Test” requires that insurers use a single model across all their applications, whether it’s pricing, exposure management or risk management. That brings a lot of uniformity within organizations which historically wasn’t the case. There would be organisations that liked a certain modelling approach for pricing, but would have different ways of dealing with the overall risk management.

“Model Approval” is likely to promote uniformity across organizations. Regulators can only be comfortable with a narrow range of models, which reinforces the use of those particular models. (Re)insurers know it: they tend to take the safe route and go with the safe bet for model approval, i.e. the model that everybody else is using. It’s a bit of a “you can’t go wrong buying IBM” attitude.

This has created a market which is “putting its eggs in the same basket” when it comes to modelling, and is hence much more exposed to “rare but extreme” model failures.

Dr. Anders Sandberg: Taking a step back: when people use the term “systemic risk”, what do we even mean? It turns out people sometimes talk about systemic risk as a system where everybody is exposed to the same external input.

But I think the crucial thing going on in catastrophe modeling is that we get systemic risk from the structure of the system. The system is the models, and how the models are used within companies and across the insurance markets. In that sense you have systemic risk when every part works perfectly well on its own, but when you connect them, disasters can happen.

Most of the time it works well. It’s just that failures become correlated across businesses, which means there can be real trouble that is hard to predict beforehand.

As JB said, there are a lot of shared models. But there are also shared assumptions in using the models, and shared cognitive biases. Everybody who uses catastrophe models is human, and humans have similar kinds of cognitive biases. This means we get systemic model risk from cognitive biases.

RMN: How do you deal with model users who get frustrated when the models show different results for the same risk, say a 1-in-100-year storm? I would assume you would want different answers from different models?

Sandberg: It’s a very human thing to assume there must be one answer to a question, which is problematic because many questions are actually ill-posed.

A model is a particular way of getting an answer to a particular question, and one shouldn’t be surprised that different models give different answers. That is actually a very useful thing, because it gives you some sense of the model uncertainty involved.

The problem is that if you build models based on the same historical data, and maybe the model companies are looking at each other and acting in a similar manner because they’re competing, then they give more similar results. That means the model uncertainty has not gone away; it has just become much harder to see.
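To make that point concrete, here is a minimal, purely illustrative Python sketch. Every number in it is invented (the “true” loss, the size of each vendor’s error, and the correlation parameter rho are assumptions, not taken from any real vendor model): when vendor errors are independent, the visible spread between estimates is a useful signal of model uncertainty; when vendors share data and assumptions, the spread collapses while the error of the consensus view actually grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers, purely for illustration.
true_loss = 10.0   # "true" 1-in-100-year loss, unknown in practice
error_sd = 2.0     # each vendor's modelling error around that truth

def vendor_estimates(rho, n_vendors=3, n_trials=10_000):
    """Draw correlated vendor estimates: rho=0 means independent
    modelling choices, rho near 1 means shared data and assumptions."""
    cov = error_sd**2 * (rho * np.ones((n_vendors, n_vendors))
                         + (1 - rho) * np.eye(n_vendors))
    errors = rng.multivariate_normal(np.zeros(n_vendors), cov, size=n_trials)
    return true_loss + errors

for rho in (0.0, 0.9):
    est = vendor_estimates(rho)
    spread = est.max(axis=1) - est.min(axis=1)    # visible disagreement
    consensus_err = est.mean(axis=1) - true_loss  # error of the "agreed" view
    print(f"rho={rho}: mean visible spread {spread.mean():.2f}, "
          f"consensus error sd {consensus_err.std():.2f}")
```

Under these invented numbers, raising the correlation roughly halves the visible spread between vendors while the error of their consensus grows: the uncertainty has not gone away, it has just become harder to see.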

You need to be a bit of a connoisseur of how much you can trust that number in that situation. So the people who really don’t like getting different estimates probably need to think more about why they want a particular one. Perhaps the honest response is to accept: “This is uncertain, I need to use more of my judgment here.”

Anchoring bias and asymmetric error checking are likely very prevalent in the catastrophe modelling market.

If somebody has been using a number and a new model comes in, it can suddenly set them completely outside their comfort zone. This was the case with RMS 11. It becomes a bit of us against them, and you see market pressure from some of the brokers and the local markets trying to refute it in a very one-sided way.

I think this is the reality of what the model vendors have to face. At some point they will encounter commercial pressure to align with the output of their competitors; otherwise there’s a lot of exposure to either not getting business or being selected against. As JB pointed out, the anchoring bias is hiding in the background here.

There are the intellectual reasons you can use when discussing models, but deep down there is this low-level tendency to just get stuck on numbers. We tend to believe that the truth somehow is closer to those numbers.

You can demonstrate this by asking people to state their phone number, then asking them about the population of various major cities. You will find that their estimates gravitate a bit in the direction of their phone number. It’s a statistical tendency and it’s very hard to get rid of, because it’s a low-level property of our brain. We get anchored on certain estimates, and then, of course, new versions are judged partly against those estimates.

Crozet: Asymmetric error checking happens when you’re trying to look for errors in a way that most benefits you or reinforces your expectations.

If Vendor “A” is a lot more conservative on a certain part of the world and hence priced out of modelling ILS, there might be a tendency to more readily accept the research that brings you towards the other, more commercially successful models. I think that’s just a reflection of human biases.

Commercial and political influence can also affect model choices, especially when the authorities must approve models for pricing or capital-setting purposes. They quite often have a particular view which might not be guided purely by finding the most truth-seeking model. For instance, they might be interested in keeping premiums low for their constituents.

That came up during the Tōhoku earthquake: there was an official view in Japan on what was possible, and it ruled out a magnitude nine earthquake. From a vendor’s point of view, a model which did not reflect the official view just wouldn’t sell, in the same way that a US earthquake model without the USGS methodology wouldn’t sell.

RMN: Are third-party modeling firms trying to address these systemic issues in some ways?

Sandberg: I think they are, to some extent, trying to address them.

I am not certain they are coming to it from our angle where we are thinking, “Systemic risk is a problem, what can we do to fix it?” But they are thinking of transparency and making model users more empowered.

One very clear way of improving models, for example, is addressing the autopilot bias and making things more transparent.

Autopilot bias is a recognized problem for real airplanes, and it shows up in smaller form in all the little black boxes and driver’s aids in modern cars. We have these autopilots we rely on, and most of the time they work so well that it would be stupid not to use the GPS on your phone to get to a location. Except, of course, when it fails. You can see the rather obvious analogy to what’s happening with models and insurance.

In terms of transparency, modelers actually do seem to realize that they need more transparency and need to allow customers and clients to investigate the models in new ways. I think they’re coming to this for various reasons, but I think it’s a very helpful thing.

Crozet: There’s definitely an intellectual interest, as they are also constrained in their use of certain methodologies as we discussed earlier.

But I think the Systemic Risk of Modelling is ultimately our industry’s problem, and it is about the (re)insurance industry being at risk, recognising it and addressing it. We shouldn’t look to the vendors, who are in the business of selling software solutions, to fix our problems for us.

RMN: What about the influence on systemic risk of some of the open-source models that have come out, such as the Global Earthquake Model and Oasis?

Crozet: I think they could play a huge role by offering more diversity in the market. And diversity of models reduces our exposure to a specific model failure.

But their existence does not necessarily change actual behaviors. The real question is how credible and deployable they are, and how much room there is for insurers to go to regulators and say, “I’m now using the Oasis platform to set my regulatory capital”.

Sandberg: There is also the problem that you can, of course, overdo it.

If it’s too easy to combine models, you might get a situation where model shopping happens: people select models not so much because a model offers the best view of the risk in the world, but because it gives a market advantage in some domain or in some particular negotiation. That would be the extreme where switching becomes very cheap and easy. There is some optimum in between a model monoculture and a model supermarket.

RMN: What about the influence of ILS and third-party capital firms on the catastrophe modeling industry?

Crozet: The ILS market is a great example of modeling becoming more and more influential in our industry. We have attracted investors who are not (re)insurance experts, like pension funds, on the basis of our modeling of catastrophe risks. That places models at the center of that segment of the market.

However, there is a huge concentration of placements using a particular vendor in the ILS market, which means that any significant failure in that model could threaten many ILS placements simultaneously.
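As a toy illustration of that concentration risk (every parameter here is an invented assumption, not market data): if all placements in a book are priced off one vendor’s model, a single model failure impairs the whole book at once, whereas spreading placements across several independent models leaves the expected failure rate unchanged but dramatically shrinks the tail.

```python
import numpy as np

rng = np.random.default_rng(1)

n_placements = 100   # hypothetical ILS book
p_fail = 0.02        # assumed annual chance a vendor model fails badly
n_years = 100_000

def impaired_counts(n_models):
    """Placements split evenly across vendors; a placement is impaired
    in a year when its vendor's model fails that year."""
    vendor = np.arange(n_placements) % n_models
    fails = rng.random((n_years, n_models)) < p_fail
    return fails[:, vendor].sum(axis=1)

for n_models in (1, 3):
    counts = impaired_counts(n_models)
    print(f"{n_models} model(s): mean impaired {counts.mean():.1f}, "
          f"99th percentile {np.percentile(counts, 99):.0f} / {n_placements}")
```

Same mean, very different tail: in the monoculture the 99th-percentile year impairs every placement simultaneously, while with three models it impairs roughly a third of the book.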

