Wednesday, April 27, 2011

Karen Clark on Catastrophe Models


Karen Clark, one of the founders of the catastrophe modeling industry, is interviewed by Insurance Journal in the podcast linked above. There is also an edited transcript here.

Here is an excerpt from the accompanying news story:

The need for insurers to understand catastrophe losses cannot be overestimated. Clark’s own research indicates that nearly 30 percent of every homeowner’s insurance premium dollar is going to fund catastrophes of all types.

“[T]he catastrophe losses don’t show any sign of slowing down or lessening in any way in the near future,” says Clark, who today heads her own consulting firm, Karen Clark & Co., in Boston.

While catastrophe losses themselves continue to grow, the catastrophe models have essentially stopped growing. While some of today’s modelers claim they have new scientific knowledge, Clark says that in many cases the changes are actually due to “scientific unknowledge,” which she defines as “the things that scientists don’t know.”

These comments are followed up in the interview:
Your concern is that insurers and rating agencies, regulators and a lot of people may be relying too heavily on these models. Is there something in particular that has occurred that makes you want to sound this warning, or is this an ongoing concern with these models?

Clark: Well, the concern has been ongoing. But I think you’ve probably heard about the new RMS hurricane model that has recently come out. That new model release is certainly sending shockwaves throughout the industry and has heightened interest in what we are doing here and our messages…. [T]he new RMS model is leading to loss estimate changes of over 100 and even 200 percent for many companies, even in Florida. So this has had a huge impact on confidence in the model.

So this particular model update is a very vivid reminder of just how much uncertainty there is in the science underlying the model. It clearly illustrates our messages and the problems of model over-reliance.

But don’t the models have to go where the numbers take them? If that is what is indicated, isn’t that what they should be recommending?

Clark: Well, the problem is the models have actually become over-specified. What that means is that we are trying to model things that we can't even measure. The further problem is that the loss estimates are highly sensitive to small changes in those assumptions, so there is a huge amount of uncertainty. Even minor changes in these assumptions can lead to large swings in the loss estimates. We simply don't know what the right measures are for these assumptions. That's what I meant… when I talked about unknowledge.

There are a lot of things that scientists don’t know and they can’t even measure them. Yet we are trying to put that in the model. So that’s really what dictates a lot of the volatility in the loss estimates, versus what we actually know, which is very much less than what we don’t know.
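Clark's point about sensitivity is easy to see in a toy simulation. The sketch below is purely illustrative, not her method or any vendor's model: it assumes a Poisson event frequency and Pareto event severities, with made-up parameter values, and shows how small changes in a tail assumption that cannot be measured directly move a 1-in-100-year loss estimate.

import numpy as np

def annual_losses(freq, alpha, scale=1.0, years=200_000, seed=0):
    # Toy model (all assumptions invented for illustration): Poisson
    # event counts per year, Pareto event severities with tail index alpha.
    rng = np.random.default_rng(seed)
    counts = rng.poisson(freq, size=years)
    totals = np.zeros(years)
    for i, n in enumerate(counts):
        if n:
            # Inverse-transform sampling of Pareto(1, alpha) severities
            totals[i] = scale * np.sum(rng.uniform(size=n) ** (-1.0 / alpha))
    return totals

# Nudge a tail assumption that no one can measure directly and watch
# the 1-in-100-year loss estimate (99th percentile of annual loss) move.
for alpha in (1.3, 1.2, 1.1):
    losses = annual_losses(freq=2.0, alpha=alpha)
    print(f"alpha={alpha:.1f}  1-in-100-year loss ~ {np.percentile(losses, 99):,.1f}")

In this toy setup, nudging the tail index from 1.3 down to 1.1 swings the 1-in-100-year estimate by on the order of 100 percent, the kind of shift Clark describes, even though no historical record could pin the parameter down that finely.
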
In the interview she recommends the use of benchmark metrics of model performance, highlights the importance of understanding irreducible uncertainties and gives a nod toward the use of normalized disaster loss studies. Deep in our archives you can find an example of a benchmarking study that might be of the sort that Clark is suggesting (here in PDF).
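
To make the idea of benchmark metrics concrete, one simple check (a hypothetical sketch, not the metric Clark or the linked study uses; the normalized loss record and model thresholds below are invented placeholders) is to compare the exceedance rates a model implies at a few return periods with how often a normalized historical record actually exceeded those loss levels:

import numpy as np

# Hypothetical normalized annual catastrophe losses (index values, not real data).
historical = np.array([1.2, 0.4, 8.5, 2.1, 0.9, 15.3, 3.3, 0.2, 5.7, 1.1,
                       22.8, 0.6, 4.4, 2.9, 7.1, 0.8, 1.9, 12.6, 3.8, 0.5])

# Hypothetical model output: loss thresholds at selected return periods.
model_thresholds = {5: 7.0, 10: 14.0, 20: 25.0}  # return period (years) -> loss

for rp, threshold in model_thresholds.items():
    implied = 1.0 / rp                          # exceedance rate the model implies
    observed = np.mean(historical > threshold)  # exceedance rate in the record
    print(f"{rp:>3}-yr level {threshold:5.1f}: model implies {implied:.2f}/yr, "
          f"history shows {observed:.2f}/yr")

Persistent gaps between the implied and observed rates would flag a model whose loss curve is out of line with normalized experience, without requiring any judgment about which internal assumption is at fault.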
