Dismal Science Faces The Dismal Modeling Truth With COVID-19

Catastrophe Q&A With BMS Re's Siffert

Chris Westfall


The “Tragic Error” In COVID-19 Modeling

With COVID-19 smothering global trade and finance, economists have taken to incorporating epidemiological models into their own forecasts in an attempt to understand the impact of health policy on economic outcomes.

There is some common ground between the two sciences when it comes to modeling the current coronavirus outbreak, says James Stock, a professor of Political Economy at Harvard University.

“I have been taken aback by how much of the epidemiological model toolkit is the same as the economic toolkit,” Stock told Risk Market News. “Economists have models where people are looking for jobs. In epidemiology you have viruses looking for people. The math is surprisingly close.”

But both sets of models are struggling with a key input needed to successfully model COVID-19: accurate testing of asymptomatic people.

In recent weeks, academic economists from Harvard University and the University of Chicago have issued research attempting to model outcomes from the current crisis, only to come to a similar conclusion.

They agree that the lack of accurate and widespread testing of the general population is hampering both economists and epidemiologists from modeling outcomes.

“We need to solve for the number of infections. There is really no understanding of that,” adds Harvard’s Stock.

“Testing at a higher rate in conjunction with targeted quarantine policies can dampen the economic impact of the coronavirus and reduce peak symptomatic infections (relevant for hospital capacity constraints),” says a study from researchers at the University of Chicago, Duke University and the University of Minnesota. “[Targeted] quarantine and testing alters the output-death tradeoff. The government can do better than common quarantine both in terms of deaths and output.”

According to a working paper by Stock published through the National Bureau of Economic Research, economic policy can’t be properly modeled because a key variable in epidemiologic models is unknown: the infected population that is asymptomatic.

“From a health policy perspective, the first priority is to quarantine the whole population and use contact tracing to limit the spread of the virus and control public health consequences,” Stock explains. “But from an economic perspective, the first priority is to test everyone so that you can understand the percentage of the population that is infected and control economic consequences.”

Stock’s research adds that testing at a higher rate, in conjunction with targeted quarantine policies, can dampen the economic impact of the coronavirus and reduce peak symptomatic infections, which is “relevant for hospital capacity constraints.”

“We are operating under the assumption now that we have a fairly small population infected, then we just use contact tracing to limit the spread to the general population,” adds Stock, who was a member of President Obama’s Council of Economic Advisers. “But there is no evidence that this premise is correct. What we need to learn is the actual rate of infection.

“The tragic error would be to have six months of economic depression when everyone was already infected but asymptomatic.”
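The arithmetic behind that “tragic error” is simple to sketch. The snippet below is a hypothetical illustration, not Stock’s actual model; the observed case count and the asymptomatic shares are invented numbers. It shows how the same count of observed, symptomatic cases is consistent with wildly different true infection rates, depending on the unknown asymptomatic share:

```python
# Hypothetical sketch: back out the true infection rate implied by
# observed symptomatic cases under different asymptomatic shares.
# All figures below are invented for illustration.

def implied_infection_rate(observed_symptomatic, asymptomatic_share):
    """Infer the true infection rate from observed symptomatic cases,
    assuming a fraction `asymptomatic_share` of infections go undetected."""
    return observed_symptomatic / (1.0 - asymptomatic_share)

observed = 0.02  # suppose 2% of the population has tested positive with symptoms
for asym in (0.3, 0.6, 0.9):
    true_rate = implied_infection_rate(observed, asym)
    print(f"asymptomatic share {asym:.0%} -> true infection rate {true_rate:.1%}")
```

If 90% of infections were asymptomatic, a 2% observed rate would imply that a fifth of the population had already been infected, which is exactly why Stock argues that random-sample testing, not just testing of the symptomatic, is the missing input.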

Catastrophe Q&A: Model User

BMS Re’s Andrew Siffert

Position:  Vice President and Senior Meteorologist at BMS Re US Catastrophe Analytics

Education: A background that spans the sciences, capped by a master’s degree in meteorology from the University of Utah

Personal: Andrew works closely with clients to help them manage their weather-related risks, adding value through catastrophe response, catastrophe modeling, product development, and scientific research and education. He has 18 years of industry experience in the energy and insurance sectors, applying his meteorological knowledge to help companies manage their weather risks.

Prior to joining BMS, Andrew worked for FlagstoneRe as a CAT Project Manager / Meteorologist, responsible for developing LiveCAT products, frontline research, and analytics used in the underwriting process. He also worked at ACE Group on the development of ACE Global Weather Insurance products, and helped pioneer the weather derivatives and energy trading markets while at Aquila Energy Corp.

Risk Market News: How many models do you work with at BMS?

Andrew Siffert: We work with all the major modeling companies in some fashion, either through a direct license relationship or on a project-by-project basis. There are several startup companies we also work with on an as-needed basis. At BMS we focus on using the right tools for a client project, so we are not limited to only the tools we license.

This allows us to utilize the best tools and data for our client projects.

RMN: How do you come across the developing models or model startups?

Siffert:  Word of mouth.

Attending conferences is a great way to network and absorb that word of mouth. I am able to hear what some of my peers are working on. Then they'll say, “Have you checked out this model?”

These new startups have sales teams and know who the big players in brokerage and insurance are, and which firms are in particular markets. We get emails and people knocking on the door. I'm pretty active on social media too. I'll see what other people are saying about particular risk models or new research.

RMN: What are you most excited to see when it comes to developing models?

Siffert: With every peril there's always something we can learn. We're seeing that with hurricane, even though those events and risks are in some of the most trusted and well-aged models. I may be biased given my meteorological background, but I feel there's still a lot to be learned on the risk side of hurricane modeling.

Wildfire has definitely been of interest lately. What's interesting about wildfire is that the models exist, but they just haven't been as well tested as other perils. In addition, wildfire can be just as difficult to model as a tornado. You can have one house completely destroyed and the house right next door survive. As a result, wildfire is completely changing the market dynamics, especially in parts of California.

The wildfire catastrophe simulation models that were developed were intended to be used as risk aggregation and portfolio loss analysis tools, but there's an increased use of these models as true underwriting models. When you start to model at a pin-point level, that's when you start to see some of the issues that might be driving the problems in the California wildfire insurance market.

But in some ways, I think this is a little bit of putting the cart before the horse when it comes to wildfire, in the sense that the models don’t have that level of very fine resolution that the user needs to understand the hazard at a risk level. I think we're a little bit further on flood models, because either a property is going to be underwater or not based on elevation. When it comes to wildfire there are so many different variables that need to be considered.

RMN: What is your process for validating model outputs?

Siffert: There are several ways we go about validating our models.

One is historical loss analysis. We have this great wealth of data with some catastrophe events, especially hurricane risks. Also, I think it's easier to validate the model on an industry basis and compare those industry losses to the model. These calculations are pretty straightforward in terms of validating the loss.

One can also validate the hazard component, usually using third-party data or government data.

Vendors are producing a great product, but more has to be done. Every model has its weaknesses.

RMN: Based on your validation methods, what are some of the weaknesses you see in the models?

Siffert: A lot of the modeling frameworks have been around since I've been in the industry, which tells you there hasn't been a great deal of innovation. The model vendors have tried, but the frameworks are still very old and clunky. It's very cumbersome to actually get results in different formats and work with the data outside of a particular UI.

You have basic things that seem very common nowadays in technology that just can't be done in a model, such as basic exposure mapping and gathering losses at different aggregation levels. Many times, you need to take the output and put it into third-party mapping software to do that. The modelers can do more by making the models user friendly and bringing mapping and visual capabilities into understanding loss results. Think more interactive dashboards that allow the modeler to quickly dive into the exposure and results to understand and answer the “why” type questions.

The modeling companies continuously update the models. As we just talked about, the wildfire model was just updated, and hurricane models are updated every two years or so. But very few modeling platforms allow for side-by-side comparisons. Can I run last year's data with this year's data and do a side-by-side comparison? You usually need to load a different database, and you can't have two different data vintages up at the same time. That problem seems very basic.

In addition, there are some modeling companies, and some private companies, that are trying to keep up to speed on the details of geocodes and building attributes that supplement building information to help produce a better loss estimate. These are some of the things that could make modeling much more robust. While it's great that we have this detailed flood model or this wildfire model, if you don't get the geocoding truly correct, that's a problem.

RMN: Do you think there's room for standardization across all the models?

Siffert: We are part of a discussion among a group of industry participants working to standardize data frameworks.

These data standards will hopefully address data inefficiencies. I think there should be a third-party aggregator who just owns all the building information out there, already cleaned and already modeled. I think there's a lot of potential there, because there are a lot of redundancies in our industry.

RMN: Is there a philosophy behind your model use?

Siffert: We want to use the best model for the job to assist our clients with understanding their view of risk. That view is anchored in their loss experience. Clearly, you start by using historical data and loss ratios. Once you have some events, you have historical loss rates from the model as well as claims data, and it is common practice to adjust the model toward what you are, or are not, seeing in that claims data.
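That claims-based adjustment can be sketched in a few lines. This is a hypothetical, simplified illustration, not BMS's actual method; the loss figures are invented. The idea is to scale modeled losses by the observed actual-to-modeled ratio across past events:

```python
# Hypothetical sketch: adjust a catastrophe model toward claims experience
# by scaling modeled losses with the actual-to-modeled ratio on past events.
# All loss figures ($m) are invented for illustration.

modeled_past = [120.0, 45.0, 300.0]   # model's loss estimates for three past events
actual_past  = [150.0, 40.0, 390.0]   # what the claims actually came in at

# Aggregate ratio of actual claims to modeled losses across the events.
adjustment = sum(actual_past) / sum(modeled_past)

modeled_aal = 80.0                    # model's average annual loss for the portfolio
adjusted_aal = modeled_aal * adjustment

print(f"adjustment factor {adjustment:.2f}, adjusted AAL ${adjusted_aal:.1f}m")
```

In practice such adjustments are made per peril, per region, and per line of business rather than with a single portfolio-wide factor, but the principle is the same: the claims data pulls the model toward the client's own experience.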

For example, you have a structure that's set up in a strip mall. That land has been used as a convenience store, but in reality it could now be a church. So, what is the true definition of a church? The modeling company will use claims data to validate its view of church risk, which is usually a stand-alone structure with a steeple and stained-glass windows. Your own view of that church and its risk is going to be different from the modeling company's, because you know the building doesn't fit what the modeling company thinks a church is. That's one way to adjust the model and build your own view of risk.

The other way is model blending, which is very popular. Your own view of risk is not based on one model but on multiple model views, and you blend them to mitigate the single-model view. It also gives you a deeper understanding of each model, because you need to do some extensive testing to arrive at your result.
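One common way to blend is at the quantile level: take each model's loss at the same exceedance probability and average them with chosen weights. The sketch below is a hypothetical illustration with simulated loss samples standing in for two licensed vendor models; the distributions and the 50/50 weight are invented, not any vendor's actual output:

```python
# Hypothetical sketch of quantile-level model blending: combine two
# vendors' annual-loss samples into one blended view at a return period.
# The simulated distributions and weights are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)

# Simulated annual losses ($m) for the same portfolio from two models.
model_a = rng.lognormal(mean=2.0, sigma=1.0, size=100_000)
model_b = rng.lognormal(mean=2.3, sigma=0.8, size=100_000)

def blended_loss_at_return_period(losses_a, losses_b, weight_a, years):
    """Weighted average of each model's loss at the same exceedance
    probability, e.g. the 1-in-250-year loss is the 99.6th percentile."""
    q = 1.0 - 1.0 / years
    return weight_a * np.quantile(losses_a, q) + (1.0 - weight_a) * np.quantile(losses_b, q)

for rp in (100, 250):
    loss = blended_loss_at_return_period(model_a, model_b, 0.5, rp)
    print(f"1-in-{rp} blended loss: ${loss:.1f}m")
```

By construction, the blended figure always lands between the two models' estimates, which is precisely the "false security" risk Siffert raises: the weights themselves encode a prior view of which model to believe.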

Unfortunately, model blending can lead to a kind of false security, in the sense that you already envision what you think the model should produce and you blend toward your own view. People are allowed to do that.

That's what makes risk our currency. Everybody has their own view of risk; it's the cash of our industry, and you're allowed to have a different view of what you feel is risky or not.