The announcement this week of a new aggregator of European insured values is the latest effort by the industry to understand what has become the Achilles' heel of catastrophe models: exposure data.
With poor exposure data, model results often veer wildly from actual losses and add to the insurance industry’s worries around “model risk.”
Whether it’s aggregating data or estimating insured values using new statistical methods, getting exposure data right has become an industry-wide goal.
“The rating agencies and regulators are realizing that data quality is an important part of model accuracy,” says Ajay Lavakare, senior vice president and managing director of data solutions at RMS in California. “Having quality data can greatly reduce the uncertainty in the model results.”
This week a consortium of reinsurers, carriers and brokers announced the formation of PERILS AG, a Zurich-based company that will aggregate European insurance data and sell the results through a subscription service.
PERILS will offer two main services: aggregated industry-wide exposure data and loss estimates following major catastrophes. PERILS will begin by collecting information on European windstorm risks and eventually expand to cover other perils in Europe.
The firms involved in PERILS are Allianz, AXA, Groupama, Guy Carpenter, Munich Re, PartnerRe, Swiss Re and Zurich.
The goal of PERILS is to provide consistent industry data to improve the understanding of potential natural catastrophes and — in turn — enhance the modeling of natural cats, said Erik Rüttener, Ph.D., head of catastrophe research and modeling at PartnerRe.
“The insurance industry lacks transparency around European windstorm losses and PERILS looks to address that issue,” Rüttener says, adding that improved data quality could be used to develop industry loss indexes that would spur the growth of capital market products, such as ILWs and cat bonds.
The idea of aggregating exposure data is not new, but past efforts have been stymied by insurers' confidentiality concerns. Access to exposure data can easily be transformed into insights about an insurer's private client information.
Dr. Rüttener says PERILS will have a significant focus on data privacy. “Confidentiality is a key priority and we will have a robust IT infrastructure that will ensure that the data remains confidential,” he says. “We will also have safeguards and guidelines to make sure the only thing that is shared is the aggregated data.”
Model Firm Push
Beyond the PERILS group, other firms have been attempting to address the exposure issue with their own services.
RMS began offering its own "ExposureRefine" service in 2006, which uses proprietary and public data to help insurers improve the exposure information they enter into their models.
“Data quality is the one controllable element that users have when using their models,” says Lavakare. “If an insurer has missing data, they can use our database to fill in the blanks.”
Lavakare says there are three elements that insurers need to focus on when entering exposure data: the location (inland, on a fault or beachside), the risk (the physical characteristics of a building) and the value of the property (how much the building is worth and likely business interruption costs).
Exposure data determines whether individual properties — and the portfolio as a whole — are valued correctly, Lavakare adds.
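To make the three elements concrete, here is a minimal sketch of what an exposure record and the "fill in the blanks" approach Lavakare describes might look like. The field names, defaults, and `fill_missing` helper below are purely illustrative assumptions, not RMS's actual schema or method:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExposureRecord:
    # Location: where the property sits (inland, on a fault, beachside)
    latitude: float
    longitude: float
    # Risk: physical characteristics of the building
    construction: Optional[str] = None   # e.g. "wood frame", "masonry"
    year_built: Optional[int] = None
    # Value: replacement cost plus likely business-interruption costs
    building_value: Optional[float] = None
    bi_value: Optional[float] = None

# Hypothetical defaults a reference database might supply for a region.
REGIONAL_DEFAULTS = {"construction": "masonry", "year_built": 1980}

def fill_missing(record: ExposureRecord, defaults: dict) -> ExposureRecord:
    """Fill blank attributes from a reference database's regional defaults."""
    for field_name, value in defaults.items():
        if getattr(record, field_name) is None:
            setattr(record, field_name, value)
    return record

# An insurer's record with missing risk attributes gets completed
# from the reference data before it is fed into a model.
rec = ExposureRecord(latitude=48.1, longitude=11.6, building_value=2.5e6)
rec = fill_missing(rec, REGIONAL_DEFAULTS)
```

The point of the sketch is only that missing risk attributes are backfilled from reference data while the insurer's own values (here, location and building value) are left untouched.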
Getting exposure data right has become an increasingly important issue since rating agencies and regulators are focusing on property/casualty risk assessments and the industry’s reliance on models.
“Everyone is pushing to make sure the data is input completely and accurately,” Lavakare says.