Catchment Models as Hypotheses

I am just back from the IAHS meeting in India, held jointly with the IAH. There were a number of sessions relevant to the focus areas of CCN, including flood risk management; surface water-groundwater interactions; hydroinformatics; new statistics in hydrology; minimising adverse impacts of global change on water resources; sustainability of groundwater; improving integrated water resources management; hydrological prediction where data is sparse; hydrological theory and limits to predictability in ungauged basins; use of isotope tracers; and precipitation variability and water resources. There was also a session reporting on progress in the Predictions in Ungauged Basins (PUB) working group of IAHS, with an outline of the Benchmark Report that is currently being prepared.

One thing that generated quite a lot of discussion during the meeting was the requirement to have catchment models for whole regions or countries, or even the globe.  Kuni Takeuchi from Japan reported on a global system based on remote sensing information to try to provide flood alerts for areas where data is sparse and local infrastructure might not be well developed. The predictions of this model are provided freely so that local hydrologists can assess and modify them if necessary. Berit Arheimer reported on the application of the new SMHI semi-distributed model, which combines hydrological and water quality predictions for the whole of Sweden, as a tool for assessments under the Water Framework Directive, while Olga Semenova from the State Hydrological Institute in St. Petersburg presented the results of applying a semi-distributed model to large catchments in Russia and elsewhere.  In both these cases, applications of the models depend on the identification of hydrological response units, with an expectation that parts of the landscape that belong to the same response units should have similar parameters.  In Sweden the parameter values are estimated by fitting to some of the available gauged basins and then applied across the whole country, with evaluation at other gauges.  Success, including predictions of nutrient and isotope concentrations, was very good in some catchments and not so good in others, but evaluation over such a large range of conditions should lead to future improvements.  In Russia, prior estimates of parameter values are used, based on physical arguments, but with evaluation across a wide range of gauges.
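
To make the response-unit idea concrete, here is a minimal sketch of how parameters tied to landscape classes, rather than to individual catchments, can be transferred to ungauged sites. The class names, parameter values and simple runoff-coefficient formulation are illustrative assumptions only, not the actual SMHI or State Hydrological Institute formulations.

```python
# Hypothetical parameter set per hydrological response unit (HRU) class,
# e.g. fitted against a subset of gauged basins and then reused everywhere.
HRU_PARAMS = {
    "forest_till":   {"runoff_coeff": 0.35},
    "agricultural":  {"runoff_coeff": 0.50},
    "mountain_bare": {"runoff_coeff": 0.70},
}

def catchment_runoff(rainfall_mm, hru_fractions):
    """Area-weighted runoff (mm) for a catchment described only by the
    fraction of its area in each HRU class -- no local calibration needed."""
    return rainfall_mm * sum(
        frac * HRU_PARAMS[hru]["runoff_coeff"]
        for hru, frac in hru_fractions.items()
    )

# An ungauged catchment inherits parameters through its landscape make-up.
print(catchment_runoff(20.0, {"forest_till": 0.6, "mountain_bare": 0.4}))
```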

I have written elsewhere about “models of everywhere” of this type (see ) and gave a short presentation about these ideas at short notice when there was an empty slot in the session on limits to predictability. I am not so convinced that it will be possible to find parameter values that will give good results everywhere, but the advantage of this type of model is that the visualisations of the results mean that local stakeholders will be able to look at the predictions and make their own evaluations of how well the model is doing. This can then feed back directly into model improvements, in particular where local knowledge suggests that there are deficiencies in the predictions. Effectively, this form of evaluation can be treated as hypothesis testing of the local implementation of a model, with a requirement to respond when it is suggested that the model as hypothesis can be rejected. This would essentially constitute a learning process about local places that improves the models over time. Uncertainty estimation has to be an important part of such evaluations, of course, because we would not want to reject a model just because it has been driven by poor input data.  In the Swedish case, it was apparent that the predictions were not so good in some of the catchments in the mountains towards the Norwegian border, where the input data on rainfall and snowmelt were probably less accurate.  In the case of the Russian model, the modellers feel it is fair to adjust some of the snow accumulations each year, since these are poorly known.  Neither group currently does this within any formal uncertainty analysis.
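
As a rough illustration of what such hypothesis testing might look like, the sketch below applies a limits-of-acceptability style check: the local implementation of a model is rejected only if too many of its predictions fall outside error bounds around the observations, so that some allowance is made for uncertain inputs and observations. The error bounds, thresholds and data here are all illustrative assumptions.

```python
def evaluate_as_hypothesis(observed, predicted, rel_error=0.2,
                           max_fraction_outside=0.1):
    """Treat the local model implementation as a hypothesis: count the time
    steps where the simulation falls outside limits of acceptability around
    the observations, and reject only if too many steps fail."""
    outside = sum(
        1 for obs, sim in zip(observed, predicted)
        if not (obs * (1 - rel_error) <= sim <= obs * (1 + rel_error))
    )
    fraction = outside / len(observed)
    return fraction > max_fraction_outside, fraction

obs = [1.2, 3.4, 2.8, 5.0, 4.1]   # e.g. observed discharge (m3/s)
sim = [1.1, 3.9, 2.6, 6.5, 4.0]   # model predictions for the same steps
rejected, frac = evaluate_as_hypothesis(obs, sim)
print(f"rejected: {rejected}, fraction outside limits: {frac:.2f}")
```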

Some of the discussion after the presentation was concerned with what such a learning process would look like.  Would it mean adjusting local parameters within some general model structure to get the best fit possible (Hoshin Gupta), or making local modifications to model structures, with the danger that we might end up with ad hoc changes to deal with local circumstances (Ross Woods)? I suspect we will need something of both. Retaining a general model structure would have some advantage in moving from model testing on gauged sites to application at ungauged sites, but if such a model is to be truly general then it might be more complex than is needed in many catchments (even if components could be switched on or off).  Different types of applications might also need more or less complex models (see some of our flood forecasting work using very simple transfer function models, e.g. ). Olga Semenova argued that only one model structure should be necessary if it is based on adequate process representations (we disagree about whether the process representations in the Russian model are adequate, even if they give reasonable simulations).  Her argument is that a successful hydrological model should be tested over a wide range of conditions, rather than just being calibrated locally, and should perform well across all applications. If not, we should look for something better.
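
One way of reading the "components switched on or off" idea is a single general structure with optional modules that are simply disabled where they are not needed, rather than ad hoc structural rewrites for each place. The toy snow store and degree-day melt formulation below are illustrative assumptions, not a representation of any of the models discussed above.

```python
def simulate_step(rain_mm, temp_c, state, use_snow=True, ddf=3.0):
    """One time step of a toy model in which the snow store is an optional
    component.  ddf: degree-day factor (mm/degC/day), illustrative value."""
    melt = 0.0
    if use_snow:
        if temp_c <= 0.0:            # precipitation accumulates as snow
            state["snowpack"] += rain_mm
            rain_mm = 0.0
        else:                        # degree-day melt released to the input
            melt = min(state["snowpack"], ddf * temp_c)
            state["snowpack"] -= melt
    return rain_mm + melt            # water available for runoff generation

state = {"snowpack": 10.0}
print(simulate_step(5.0, 2.0, state))          # snow module on: rain + melt
print(simulate_step(5.0, 2.0, {"snowpack": 0.0}, use_snow=False))  # module off
```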

Models of everywhere of this type are likely to become more common in the future, driven by the needs of the Water Framework Directive, Floods Directive and integrated water resources management. Thus the discussion about what a good model should look like and how models should be tested is likely to go on… It is certainly relevant to applications involving catchment change. Something for our readers to add comments on…
