Handling Uncertainties in Catastrophe Modelling

I am on the train on the way back from a meeting at Lloyd’s of London on Handling Uncertainties in Catastrophe Modelling for Natural Hazard Impact. The meeting was organised by another Knowledge Transfer Network, the Industrial Mathematics KTN, through its Special Interest Group (SIG) for Environmental Risk Management, which is also supported by NERC. The SIG has prioritised the insurance industry in this area, and the meeting brought together academics and representatives of both underwriting and risk modelling companies.

The morning talks gave the insurance industry’s perspective on handling uncertainties. It is clear that they know only too well that their predictions of expected losses from extreme natural events are often based on rather uncertain input data and model components (and on exposure to losses not currently included in the models), but that they are already looking forward to being able to take account of at least some of the relevant uncertainties. One of the issues in doing so, however, is that some of the current models take a week or two to run a single deterministic loss calculation. There was some hope that a new generation of computer technology, such as the use of graphics processing units (GPUs), would reduce model run-times sufficiently to allow some assessment of uncertainty (they clearly have not tried programming a GPU yet, though this is getting easier!). One presentation suggested that being able to make more and more runs would allow uncertainties to be reduced. Over lunch I asked the speaker what he really meant by this… it seemed that it was only that the estimation of probabilities for a given set of assumptions could be made more precise with more runs.
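To make that distinction concrete, here is a minimal sketch (entirely invented numbers, not any company’s model) of what “more runs” actually buys: with the assumed loss distribution and its parameters held fixed, more Monte Carlo runs shrink the sampling error on an estimated exceedance probability, but they say nothing about whether the assumptions themselves are right.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed (and held fixed) annual loss model: lognormal with given parameters.
MU, SIGMA = 2.0, 1.0
THRESHOLD = 50.0  # loss level whose annual exceedance probability we want

for n_runs in (1_000, 10_000, 100_000, 1_000_000):
    losses = rng.lognormal(mean=MU, sigma=SIGMA, size=n_runs)
    p_hat = np.mean(losses > THRESHOLD)
    se = np.sqrt(p_hat * (1.0 - p_hat) / n_runs)  # binomial standard error
    print(f"{n_runs:>9,} runs: P(loss > {THRESHOLD}) = {p_hat:.4f} +/- {se:.4f}")

# The estimate converges to the value implied by MU and SIGMA;
# nothing here reduces the uncertainty about whether MU and SIGMA are right.
```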

There was a demonstration of this in the afternoon, in an interesting study to estimate the uncertainty in losses due to hurricanes in Florida. Five insurance modelling companies had been given the same data and asked to estimate both the expected loss for given return periods of events (up to 1000 years) and a 90% confidence range. Two of the companies had run multiple long-term realisations of a given sample distribution of events based on the prior distributions of event parameters. Their confidence limits became narrower as the number of realisations increased, since more realisations improved the integration over the range of events allowed by the fixed prior distributions. Two other companies had taken a different strategy, running realisations of a length consistent with historical data periods, which resulted in much wider uncertainty limits. Uncertainty estimates, particularly when not conditioned on historical data, will always depend directly on the assumptions on which they are based! An analysis of the Florida hurricane study had suggested that the uncertainty in the estimated hazard was more important than the uncertainty in the estimated vulnerability. I am not sure that this would necessarily be the case in estimating flood risk.
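The contrast between the two strategies can be illustrated with a toy event model (Poisson event counts and exponential severities, with made-up parameters; nothing here is taken from the Florida study): very long catalogues drawn from fixed assumptions give tightly clustered return-level estimates, while catalogues only as long as a historical record scatter widely.

```python
import numpy as np

rng = np.random.default_rng(7)

RATE = 0.5           # assumed mean number of loss-causing events per year (Poisson)
MEAN_LOSS = 10.0     # assumed mean loss per event (exponential severity), arbitrary units
RETURN_PERIOD = 100  # years

def simulate_annual_losses(n_years):
    """Aggregate annual losses from the fixed event-rate and severity assumptions."""
    n_events = rng.poisson(RATE * n_years)
    years = rng.integers(0, n_years, size=n_events)   # assign each event to a year
    losses = rng.exponential(MEAN_LOSS, size=n_events)
    return np.bincount(years, weights=losses, minlength=n_years)

def return_level(annual_losses, return_period):
    """Empirical annual loss exceeded on average once per `return_period` years."""
    return np.quantile(annual_losses, 1.0 - 1.0 / return_period)

for label, catalogue_years in (("long catalogues (10,000 years)", 10_000),
                               ("historical-length catalogues (100 years)", 100)):
    estimates = [return_level(simulate_annual_losses(catalogue_years), RETURN_PERIOD)
                 for _ in range(200)]
    lo, hi = np.percentile(estimates, [5, 95])
    print(f"{label}: 100-year loss estimate, 5th-95th percentile {lo:.1f} to {hi:.1f}")
```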

There was some discussion of how to convey these assumptions to the people who actually take the risk for insurance companies in committing to contracts, and whether they should be given ‘dials’ that would let them explore the sensitivity of estimated losses to different parameters. Given long model run times and short decision times in the real world, this was generally not considered feasible (although more flexibility to explore model sensitivities, rather than the ‘black box’ results currently provided, was suggested). There was also a suggestion that it was as important to “understand what is not in the models” as to understand sensitivities to what is in the models, and that “adding more science” would not necessarily be considered advantageous in an industry with a 300-year-old tradition.

One thought that came to me during the meeting was inspired by a passing mention of the verification of uncertainty estimates. It seems to me that this would (a) be very difficult for any form of extreme event and (b) never happen anyway, because data from a new extreme event will be used to revise the estimates of prior probabilities that might have been used in estimating the uncertainties. We know that this happens in flood risk estimation, where every new extreme flood is used to revise the estimates of the probabilities of exceedance at a site. Enough for now, it was an early start this morning!!
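As a toy illustration of why verification never quite gets its chance (synthetic annual maxima and a simple method-of-moments Gumbel fit, not any agency’s actual procedure): the moment a new extreme is added to the record, the fitted exceedance probabilities, and hence the very estimates one might have tried to verify, have already moved.

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_fit_moments(annual_maxima):
    """Method-of-moments Gumbel parameters (location mu, scale beta)."""
    beta = np.std(annual_maxima, ddof=1) * np.sqrt(6.0) / np.pi
    mu = np.mean(annual_maxima) - EULER_GAMMA * beta
    return mu, beta

def exceedance_prob(flow, mu, beta):
    """Annual probability that the maximum flow exceeds `flow` under the fitted Gumbel."""
    return 1.0 - np.exp(-np.exp(-(flow - mu) / beta))

rng = np.random.default_rng(3)
record = rng.gumbel(loc=100.0, scale=25.0, size=40)  # 40 years of synthetic annual maxima

design_flow = 250.0
mu, beta = gumbel_fit_moments(record)
print(f"Before the new flood: P(annual max > {design_flow}) = "
      f"{exceedance_prob(design_flow, mu, beta):.4f}")

# A new extreme flood arrives and is appended to the record before anyone
# has had a chance to "verify" the old estimate against independent data.
updated = np.append(record, 320.0)
mu, beta = gumbel_fit_moments(updated)
print(f"After the new flood:  P(annual max > {design_flow}) = "
      f"{exceedance_prob(design_flow, mu, beta):.4f}")
```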
