Scale, lead times and predictability

Here are some thoughts arising from participating in a recent EU IMPRINTS project meeting (held at Stresa on the shore of Lake Maggiore, but accompanied by snow!). The IMPRINTS project (see www.imprints-fp7.eu) overlaps with CCN in being concerned with flash flood and debris flow forecasting, although with an emphasis on Mediterranean case studies. As such it is concerned with risk and uncertainty, catchment change and climate change. Various hydrological models and ensemble forecast inputs are being used, some simple forecasts of land use change are being used, and some climate change projections will be used. A number of practitioners from the case study basins are involved in the project, particularly in asking some difficult questions of the researchers about the meaning of the uncertainties that are being estimated, and about whether flash floods and debris flows are likely to be more common in future.

One of the really interesting questions that arises in such a project is concerned with scale. The predictions from distributed hydrological models are driven by either radar or numerical weather prediction (NWP) products at grid scales of 1 to 10 km. Hydrological model parameters are also estimated at scales that can be larger, or may be calibrated or conditioned at larger catchment scales. Locally damaging flash floods and debris flows can be triggered by rainfall variability, interacting with local antecedent patterns of soil water, at still smaller scales. There is therefore uncertainty in how the larger scale might inform the smaller scale in evaluating potential risk. One strategy that has been explored in the IMPRINTS project is to run the hydrological model for a long simulation period driven by the same input product that will be used to drive longer term forecasts (the COSMO NWP 30 year reanalysis in IMPRINTS) and then define a level of risk relative to the frequency of occurrences in the long term simulation. This allows a first assessment of risk from 5 days ahead, with consistency in any high risk locations then assessed as a potential event gets closer.
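
As a toy sketch of how such a relative definition of risk might be computed (all data here are synthetic, and the 2% threshold, ensemble size and distributions are assumptions for illustration, not values from IMPRINTS), one could derive a threshold from the long simulation and then express forecast risk as the fraction of ensemble members exceeding it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a long reanalysis-driven simulation of daily peak
# discharge at one grid cell (e.g. 30 years driven by an NWP reanalysis).
long_run = rng.gamma(shape=2.0, scale=5.0, size=30 * 365)

# Define "high risk" relative to the long-run simulation: here, the discharge
# exceeded on only 2% of simulated days (the 2% figure is arbitrary).
threshold = np.quantile(long_run, 0.98)

# Hypothetical 5-day-ahead ensemble forecast of peak discharge at that cell.
forecast = rng.gamma(shape=2.0, scale=6.0, size=51)

# Relative risk expressed as the fraction of members exceeding the threshold.
print(f"threshold = {threshold:.1f}, "
      f"exceedance fraction = {np.mean(forecast > threshold):.2f}")
```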

This should be a useful tool for detecting alerts to potential flash flood and debris flow situations. It is, however, too crude a tool for deciding about more detailed local warnings at shorter lead times. That needs more accurate spatial estimates of rainfall, coupled with more detailed hydrological modelling of antecedent conditions and runoff generation, to give more precision in localizing potential flash flood and debris flow sites (possibly at sub-grid scales). Radar nowcasting can give rainfall inputs down to 1 km scales, but accuracy decreases rapidly with lead time. At longer lead times, these estimates can be blended into the outputs from ensemble NWP, but NWP systems are not tuned towards giving accurate precipitation estimates at 12 to 24 hour time scales. In the hydrological model, it will be difficult to identify local parameter values and sub-grid variability in predicting the antecedent conditions and runoff generation.
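
As a purely illustrative sketch of that blending step (the linear taper and the 3 hour crossover are assumptions, not the scheme used by any particular operational system), the weight given to the radar nowcast could simply be made to decay with lead time:

```python
def blended_rainfall(nowcast, nwp, lead_time_h, crossover_h=3.0):
    """Blend a radar nowcast with an NWP rainfall estimate (mm/h).

    The nowcast weight tapers linearly from 1 at zero lead time to 0 at the
    assumed crossover lead time, after which only the NWP estimate is used.
    """
    w = max(0.0, 1.0 - lead_time_h / crossover_h)  # weight on the nowcast
    return w * nowcast + (1.0 - w) * nwp

# Example: a 1 km grid-cell rainfall rate at a 2-hour lead time.
print(blended_rainfall(nowcast=12.0, nwp=6.0, lead_time_h=2.0))  # -> 8.0
```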

Thus, the estimation of local risk at lead times that will be useful for warning purposes will be uncertain, in ways that make it difficult even to estimate the uncertainty (the studies done so far suggest that the range of uncertainty in 5 day ahead forecasts based on the COSMO-LEPS NWP ensemble can certainly bracket what actually happens, but only because the uncertainty bounds are so wide). This suggests that some “rules of thumb” might need to be invoked, which might again take the form of assessing relative risk. Just how high does the estimated local grid scale risk at different lead times have to be before a local warning should be issued? And just how does this vary in space? The IMPRINTS project aims to explore some (fuzzy) rules for such assessments. Ideally, the rules would be evaluated or validated by assessing a past sequence of estimates against the actual occurrence of flash floods and debris flows causing damage to communities at risk. But such occurrences are rare by nature (the extremes of a distribution of events), and events that did not cause significant damage in the past might not have been recorded. This implies that future events might then be critical in assessing and refining the system. But the IMPRINTS project lasts only 3 years: this suggests that the tools developed need to be simple enough to be refined by practitioners who have responsibility over longer periods of time.
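
For illustration only, a crisp stand-in for the kind of lead-time-dependent rule of thumb discussed above might look like the following (the probability thresholds and lead-time bands are invented, and a genuinely fuzzy rule would grade between categories rather than switch abruptly):

```python
def warning_level(exceedance_prob, lead_time_h):
    """Map an exceedance probability and lead time to a warning category."""
    # Demand stronger evidence at short lead times, when a warning has direct
    # local consequences; be more permissive for early, low-cost alerts.
    if lead_time_h <= 6:
        return "alert" if exceedance_prob >= 0.6 else "none"
    elif lead_time_h <= 48:
        return "watch" if exceedance_prob >= 0.4 else "none"
    else:
        return "watch" if exceedance_prob >= 0.2 else "none"

print(warning_level(0.45, lead_time_h=24))  # -> "watch"
```

Exactly where such thresholds should sit, and how they should vary from place to place, is of course the difficult part, and is what a past (or future) sequence of events would be needed to test.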

But longer periods of time also imply catchment change, both in land management and in climate variability or change. The predictions of both are locally highly uncertain, although it would be feasible to run the hydrological model under different potential scenarios by using stochastic realisations of both land use and future weather to modify past spatial patterns of inputs (this is an area of clear overlap with CCN). At the IMPRINTS meeting there was a discussion about whether such highly uncertain predictions could be informative for future decision making, over and above some other means of deciding on the value of investment to offset the potential for future damaging impacts. This is a recurring theme in impact studies and research funding. Nearly all such studies have an underlying concept based on a risk-based decision framework: estimate the probability of future outcomes and the costs of mitigation so as to prioritize the investments that will have most value. This is one evidence-based strategy that can be used to justify major government investment in mitigation measures.
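
A minimal sketch of that kind of scenario testing, with a stochastic multiplicative change factor standing in for future weather and a shift in a crude runoff coefficient standing in for land use change (all numbers invented, and far simpler than a distributed model run):

```python
import numpy as np

rng = np.random.default_rng(1)
past_rainfall = rng.gamma(2.0, 4.0, size=365)  # synthetic daily rainfall (mm)

def annual_runoff(rainfall, climate_factor, runoff_coeff):
    """Crude annual runoff: past rainfall scaled by a change factor and coefficient."""
    return float(np.sum(rainfall * climate_factor * runoff_coeff))

# 1000 stochastic realisations of the climate change factor (assumed +10% +/- 10%),
# under two land-use scenarios represented only by the runoff coefficient.
factors = rng.normal(loc=1.1, scale=0.1, size=1000)
current = [annual_runoff(past_rainfall, f, runoff_coeff=0.30) for f in factors]
urban = [annual_runoff(past_rainfall, f, runoff_coeff=0.40) for f in factors]

print("current land use, 5-95% annual runoff (mm):", np.percentile(current, [5, 95]).round(0))
print("more urbanised,   5-95% annual runoff (mm):", np.percentile(urban, [5, 95]).round(0))
```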

The case of the Kielder reservoir in Northumberland is perhaps instructive in this respect. Kielder was designed to service the growth of heavy industry in the north-east of England. It was commissioned in 1981, just before the decline of heavy industry in the region. The north-east is now well protected against potential future water shortages, but not because of the precision of future projections of risk, which were quite wrong in this case. But investments in mitigation strategies still require prioritization. Even if we only consider flood risk, there are always many more demands for spending on defence measures than there is money allocated each year. Prioritization in that case is based on rules for (a rather simple and largely deterministic) cost-benefit analysis. Is it possible that detailed studies of other types of impacts of future change could lead to similarly simple rules for prioritization (for example in the way in which OFWAT will require that climate change be incorporated into the next AMP submissions by the water utilities, to cover the period 2015-2020)? And how should these rules reflect the uncertainties underlying the estimation of impacts?
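
Purely as an illustration of how simple such a deterministic rule can be (scheme names, costs and benefits are invented), prioritization by benefit-cost ratio within an annual budget amounts to little more than a sort:

```python
# Candidate flood defence schemes: capital cost and expected damage avoided (£M).
schemes = [
    {"name": "Scheme A", "cost": 2.0, "damage_avoided": 9.0},
    {"name": "Scheme B", "cost": 5.0, "damage_avoided": 12.0},
    {"name": "Scheme C", "cost": 1.0, "damage_avoided": 2.5},
]

# Rank by benefit-cost ratio and fund schemes until the annual budget runs out.
budget = 6.0
for s in sorted(schemes, key=lambda s: s["damage_avoided"] / s["cost"], reverse=True):
    if s["cost"] <= budget:
        budget -= s["cost"]
        print(f"Fund {s['name']} (benefit-cost ratio {s['damage_avoided'] / s['cost']:.1f})")
```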
