
Understanding, communicating and managing uncertainty and risk related to future changes in catchments.

CCN News

Testing catchment models as hypotheses
added on 01 10 2009 by Clare Black

I am currently guest editing the second Annual Review issue for Hydrological Processes, planned for early in 2010, which will have a number of contributions focused on preferential flows in catchments and the estimation of mean travel times or mean residence time distributions in catchments. Both of these pose interesting issues in respect of all three focus areas in the Catchment Change Network – flood generation, water quality and water scarcity. Particularly in the water quality area, the way that they are linked will have an impact over both short and longer time scales. Despite this importance, our understanding of both preferential flows and travel time distributions is still limited, and this got me thinking about developing that understanding through predictive models treated as hypotheses about how a catchment system functions.
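
As a rough illustration of the travel time problem (not something from the issue itself), the sketch below shows, in Python, the common approach of assuming a parametric travel time distribution (here a gamma distribution), convolving it with a tracer input signal, and fitting its parameters to observed stream concentrations; the mean travel time then follows from the fitted parameters. The function names and the choice of a gamma distribution are illustrative assumptions.

    # A minimal sketch of the convolution approach to estimating a catchment
    # travel time distribution: an assumed gamma-shaped distribution is convolved
    # with a tracer input signal and its parameters are fitted to observed stream
    # tracer concentrations. All names and numbers here are hypothetical.
    import numpy as np
    from scipy.stats import gamma
    from scipy.optimize import minimize

    def simulate_tracer(c_in, shape, scale, dt=1.0):
        """Convolve input tracer concentrations with a gamma travel time distribution."""
        t = (np.arange(len(c_in)) + 0.5) * dt          # midpoints avoid the pdf at t=0
        h = gamma.pdf(t, a=shape, scale=scale)         # travel time density
        h /= h.sum() * dt                              # normalise to unit mass
        return np.convolve(c_in, h)[:len(c_in)] * dt   # modelled stream concentration

    def fit_travel_times(c_in, c_obs):
        """Fit gamma parameters by least squares; mean travel time = shape * scale."""
        def loss(p):
            shape, scale = np.exp(p)                   # keep parameters positive
            return np.sum((simulate_tracer(c_in, shape, scale) - c_obs) ** 2)
        res = minimize(loss, x0=np.log([1.0, 100.0]), method="Nelder-Mead")
        shape, scale = np.exp(res.x)
        return shape, scale, shape * scale             # mean travel time (units of dt)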

This has some implications for predicting the effects of change, since we clearly cannot easily test hypotheses (or sets of modelling assumptions) about what might happen in a particular catchment of interest in the future. We more usually rely on testing hypotheses under current conditions and, given a degree of belief that we are getting the right results for the right reasons, exploring the consequences for scenarios of future change. Increasing that degree of belief is the purpose of testing, but there are two difficulties involved in this process. The first is that, as with classical statistical hypothesis testing, there is a possibility of making Type I errors (false positives, or incorrectly accepting a poor model) or Type II errors (false negatives, or incorrectly rejecting a good model), particularly when there are observational errors in the data being used in testing. The second is that this process of predicting future change relies on a form of uniformitarian principle, i.e. that a model that has survived current tests has the functionality required to predict the potentially different future conditions. In both cases, classical hypothesis testing will be limited by epistemic errors (see the previous entry of 27th September) in the observations and in our expectations about future processes.
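
To make the first difficulty concrete, here is a hypothetical synthetic experiment in Python (not from the post): a toy catchment model is evaluated against observations corrupted by random error, and the acceptance test sometimes passes a biased model (a Type I error) and sometimes fails the correct one (a Type II error). The model, error magnitudes and acceptance limit are all invented for illustration.

    # Synthetic illustration of Type I and Type II errors in model acceptance
    # when the evaluation data contain observational error. The catchment and
    # models are toy constructs: a single linear store driven by random rainfall,
    # with the "poor" model overestimating flows by 10%.
    import numpy as np

    rng = np.random.default_rng(1)

    def linear_store(rain, k=0.3):
        """Toy runoff model: a single store drained at fraction k per time step."""
        s, q = 0.0, []
        for r in rain:
            s += r
            q.append(k * s)
            s -= q[-1]
        return np.array(q)

    n_steps, sigma_obs, rmse_limit = 100, 0.5, 0.55
    type1 = type2 = 0
    trials = 2000

    for _ in range(trials):
        rain = rng.exponential(2.0, size=n_steps)
        q_true = linear_store(rain)                              # the real catchment response
        q_obs = q_true + rng.normal(0, sigma_obs, n_steps)       # what we actually measure
        for q_sim, good in [(q_true, True), (1.1 * q_true, False)]:
            rmse = np.sqrt(np.mean((q_obs - q_sim) ** 2))
            accepted = rmse < rmse_limit                         # acceptance test on noisy data
            if accepted and not good:
                type1 += 1                                       # poor model passes the test
            elif not accepted and good:
                type2 += 1                                       # good model fails the test

    print(f"Type I rate: {type1/trials:.2f}  Type II rate: {type2/trials:.2f}")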

That does not mean, however, that we should not try to test models as hypotheses, only that new ways of doing so might be required. We could, for example, explore the possibility of using real-world analogues for different scenarios of future conditions, with (approximately) the right combinations of expected temperatures, land use and rainfalls, to show that, if there are significant differences in processes, the predictive model can represent them acceptably. The analogues would not, of course, be perfect (uniqueness of place suggests that calibrated parameter values would also necessarily reflect other factors), but this might increase the degree of belief in model predictions of future change rather more than relying on a model that has only been shown to reproduce historical conditions at the site of interest. As far as I know, no such study has been reported (although analogues have been used in other ways)… does any reader know of such a study?
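
One way such an analogue study might be set up, sketched here purely as an illustration in Python (no such study is being reported), is to rank present-day gauged catchments by how closely their climate and land-use attributes match those projected for the catchment of interest, and then test the calibrated model against the closest analogues. The attribute names, values and scenario below are hypothetical.

    # Hypothetical sketch of selecting analogue catchments: rank present-day
    # catchments by similarity of their attributes to the *projected future*
    # attributes of the catchment of interest.
    import numpy as np

    # attributes: [mean annual temperature (degC), annual rainfall (mm), % arable land]
    catchments = {
        "gauge_A": np.array([9.5, 1100.0, 20.0]),
        "gauge_B": np.array([11.2, 850.0, 45.0]),
        "gauge_C": np.array([12.0, 700.0, 55.0]),
    }
    future_scenario = np.array([11.5, 820.0, 50.0])   # assumed 2050s conditions at the site of interest

    # normalise each attribute so distances are not dominated by rainfall magnitude
    all_attrs = np.array(list(catchments.values()) + [future_scenario])
    scale = all_attrs.std(axis=0)

    def similarity_rank(scenario, candidates, scale):
        """Return candidate catchments ordered by scaled Euclidean distance to the scenario."""
        dist = {name: np.linalg.norm((attrs - scenario) / scale)
                for name, attrs in candidates.items()}
        return sorted(dist.items(), key=lambda kv: kv[1])

    for name, d in similarity_rank(future_scenario, catchments, scale):
        print(f"{name}: scaled distance {d:.2f}")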



From one meeting to another...
added on 29 09 2009 by Clare Black

This week it was a workshop in Bristol organised by the NERC scoping study on risk and uncertainty in natural hazards (SAPPUR), led by Jonty Rougier of the BRisk Centre at Bristol University. The study is due to report at the end of November, with a summary of the state of the art in different areas of natural hazards and suggestions for a programme of research and training to be funded by NERC. This will have relevance for all three focus areas in CCN, including the specification of training needs.

It will not be surprising that many of the issues overlap with those that arose in the sessions at Hyderabad (see last entry). The discussions touched on the definition of risk, the assessment of model adequacy, the quantification of hazard and risk, and techniques for the visualisation and communication of uncertainties. There were interesting presentations from David Spiegelhalter on methods used in the medical sciences and Roger Cooke on methods used in the elicitation of expert opinions.

John Rees, the NERC Theme Leader for Natural Hazards, raised the following questions that he felt were important for this scoping study to address:

  • If model uncertainty is needed to better inform policy decisions, how is it best quantified?
  • How should alternative conceptual models and contradictions in the evidence be used in policy and decision making?
  • Is the mean value the appropriate safety metric to inform decisions?
  • What is the best way to represent scientific consensus?
  • What are useful mechanisms for integrating risk and uncertainty science into policy development?
  • What should be addressed by the research councils (there is a provisional budget of £1.5m available to support the research programme)?

There was a general recognition amongst the participants, who covered a range of different natural hazards, that the proper evaluation of hazard and risk is often difficult, in that we often have only sparse or no data with which to try to quantify sources of uncertainty, and there may be many different alternative predictive models of varying degrees of approximation. These are the epistemic uncertainties, but there was not much discussion about how these might be reflected in the quantification of risk. Many participants seemed to accept that the only way to attempt such a quantification was using statistical methods. I am not so sure.

It is true that any assessment of uncertainty will be conditional on the implicit or explicit assumptions made in the assessment (which might involve assuming that all sources of uncertainty can be treated statistically). It is also true that those assumptions should be checked for validity in any study (though this is not always evident in publications). But if the fact that the uncertainties are epistemic means that the errors are likely to have strong structure and non-stationarity that will depend on a particular model implementation, then it is possible that alternative non-statistical methods of uncertainty estimation might be appropriate.
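
One non-statistical option, of the kind used in limits-of-acceptability approaches such as GLUE, is sketched below in Python (an illustration only, with a toy model and invented error bounds): candidate parameter sets are retained only if their predictions fall within observation-error bounds at (nearly) all evaluation points, and prediction bounds are then formed from the retained runs rather than from a formal statistical error model.

    # Sketch of a limits-of-acceptability style evaluation: retain a model run
    # only if its predictions lie inside observation-error bounds at (nearly)
    # all evaluation points. Model, bounds and parameter ranges are hypothetical.
    import numpy as np

    rng = np.random.default_rng(7)

    def toy_model(rain, k):
        """Single linear store, drained at fraction k per time step."""
        s, q = 0.0, []
        for r in rain:
            s += r
            q.append(k * s)
            s -= q[-1]
        return np.array(q)

    rain = rng.exponential(2.0, size=200)
    q_obs = toy_model(rain, k=0.3) + rng.normal(0, 0.3, 200)   # synthetic "observations"
    lower, upper = q_obs - 0.6, q_obs + 0.6                    # limits of acceptability

    behavioural = []
    for k in rng.uniform(0.1, 0.6, size=2000):                 # Monte Carlo parameter sample
        q_sim = toy_model(rain, k)
        inside = np.mean((q_sim >= lower) & (q_sim <= upper))
        if inside >= 0.95:                                     # retain if 95% of points fit
            behavioural.append((k, q_sim))

    print(f"{len(behavioural)} behavioural runs out of 2000")
    if behavioural:
        sims = np.array([q for _, q in behavioural])
        band = np.percentile(sims, [5, 95], axis=0)            # prediction bounds from retained runs
        print(f"mean 5-95% band width: {np.mean(band[1] - band[0]):.2f}")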

I have been trying to think about this in the context of testing models as hypotheses given limited uncertain data (something that frequently arises in the focus areas of CCN). Hypothesis testing means considering both Type I and Type II errors (false positives and false negatives). An important area for CCN is how to avoid both types of error in model hypothesis testing, so that in prediction we are more likely to be getting the right results for the right reasons. So an interesting question is what constitutes an adequate hypothesis test, adequate in the sense of being fit for purpose. This question was addressed, at least indirectly, by Britt Hill of the US Nuclear Regulatory Commission in a talk about the performance assessment process for the safety case for the Yucca Mountain repository site.

In that study, Monte Carlo simulation was used to explore a wide range of potential outcomes (in terms of future dose of radioactivity to a local population over a period of the next 1 million years or so). A cascade of model components, from infiltration to waste leaching, was involved in these calculations, each depending on multiple (uncertain) parameters. The Monte Carlo experiments spanned a range of alternative conceptual models and possible model parameters. Decisions about which models to run appeared to have been produced by scientific consensus, something that Roger Cooke had earlier suggested was not necessarily the best way of extracting information from experts.
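
The structure of such a calculation can be sketched very crudely in Python (this is emphatically not the Yucca Mountain performance assessment code; every component, parameter range and alternative conceptual model below is hypothetical): each realisation samples uncertain inputs and a choice among alternative conceptual models, propagates them through a chain of components, and contributes to the distribution of the final outcome.

    # Highly simplified Monte Carlo cascade: sample uncertain parameters and a
    # choice of conceptual model for each component, propagate through the chain,
    # and summarise the distribution of the outcome. All values are invented.
    import numpy as np

    rng = np.random.default_rng(0)

    def infiltration(rainfall, frac):
        return frac * rainfall                       # water reaching the waste package

    def leaching_linear(infil, rate):
        return rate * infil                          # conceptual model A

    def leaching_threshold(infil, rate, threshold=50.0):
        return rate * max(infil - threshold, 0.0)    # conceptual model B

    def transport_dose(release, dilution):
        return release / dilution                    # nominal dose units

    doses = []
    for _ in range(10_000):
        rainfall = rng.normal(200.0, 50.0)           # uncertain future climate forcing
        frac     = rng.uniform(0.05, 0.3)            # uncertain infiltration fraction
        rate     = rng.uniform(0.001, 0.01)          # uncertain leaching rate
        dilution = rng.lognormal(mean=3.0, sigma=0.5)

        infil = infiltration(rainfall, frac)
        # alternative conceptual models for leaching, chosen with equal weight
        if rng.random() < 0.5:
            release = leaching_linear(infil, rate)
        else:
            release = leaching_threshold(infil, rate)
        doses.append(transport_dose(release, dilution))

    doses = np.array(doses)
    print(f"mean dose {doses.mean():.3g}, 95th percentile {np.percentile(doses, 95):.3g}")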

There is no explicit hypothesis testing in this type of approach, only some qualitative assessment of whether performance is “reasonably supported” in terms of predictions in past studies, history matching, scientific credibility etc. But it is sometimes the case that, for whatever reasons, even the best models do not provide acceptable predictions for all times and all places. This could be because of errors in the forcing data, it could be because of model structural error, or it could be because of errors in the data with which a model is evaluated. It remains impossible to really separate out these different sources of error, which means that it is difficult to do rigorous hypothesis testing for this type of environmental model.

This seems to be an area where further research is needed. It is surely important in developing guidance for model applications within each of the three CCN focus areas…



EEA joins forces with European Water Partnership
added on 28 09 2009 by Clare Black

The European Environment Agency and the European Water Partnership (EWP) announced today a new cooperation plan to improve water use in Europe. The first initiatives of the cooperation will be to develop a vision for sustainable water, raise awareness and strengthen information flows. “To be truly effective and relevant, environmental policy must be developed together with the actors who will work with it. For the water area, this means involving those who actually use, distribute and treat water such as agriculture, water utilities, industries, the energy or transport sector. This cooperation with EWP and its partners is a crucial step for us in that direction” said Professor Jacqueline McGlade, Executive Director of the EEA.


Met Office warns of catastrophic global warming in our lifetimes
added on 28 09 2009 by Clare Black

Unchecked global warming could bring a severe temperature rise of 4°C within many people’s lifetimes, according to a new report for the British government that significantly raises the stakes over climate change.

The study, prepared for the Department of Energy and Climate Change by scientists at the Met Office, challenges the assumption that severe warming will be a threat only for future generations, and warns that a catastrophic 4°C rise in temperature could happen by 2060 without strong action on emissions.

“We’ve always talked about these very severe impacts only affecting future generations, but people alive today could live to see a 4°C rise,” said Richard Betts, the head of climate impacts at the Met Office Hadley Centre, who will announce the findings today at a conference at Oxford University. “People will say it’s an extreme scenario, and it is an extreme scenario, but it’s also a plausible scenario.”

Further reading.


Catchment Models as Hypotheses
added on 27 09 2009 by Clare Black

I am just back from the , India, held jointly with the . There were a number of sessions relevant to the focus areas of CCN, including flood risk management; surface water–groundwater interactions; hydroinformatics; new statistics in hydrology; minimising adverse impacts of global change on water resources; sustainability of groundwater; improving integrated water resources management; hydrological prediction where data is sparse; hydrological theory and limits to predictability in ungauged basins; use of isotope tracers; and precipitation variability and water resources. There was also a session reporting on progress in the working group of IAHS, with an outline of the Benchmark Report that is currently being prepared.

One thing that generated quite a lot of discussion during the meeting was the requirement to have catchment models for whole regions or countries, or even the globe. Kuni Takeuchi from Japan reported on a global system based on remote sensing information to try to provide flood alerts for areas where data is sparse and local infrastructure might not be well developed. The predictions of this model are provided freely so that local hydrologists can assess and modify them if necessary. Berit Arheimer reported on the application of the new SMHI semi-distributed model, which combines hydrological and water quality predictions for the whole of Sweden as a tool for assessments under the Water Framework Directive, while Olga Semenova from the State Hydrological Institute in St. Petersburg presented the results of applying a semi-distributed model to large catchments in Russia and elsewhere. In both cases, applications of the models depend on the identification of hydrological response units, with an expectation that parts of the landscape that belong to the same response units should have similar parameters. In Sweden the parameter values are estimated by fitting to some of the available gauged basins and then applied across the whole country, with evaluation at other gauges. Success, including predictions of nutrient and isotope concentrations, was very good in some catchments and not so good in others, but evaluation over such a large range of conditions should lead to future improvements. In Russia, prior estimates of parameter values are used, based on physical arguments, but with evaluation across a wide range of gauges.
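
The regionalisation idea behind both applications can be sketched in Python as follows (an illustration only, not the SMHI or Russian model code): parameters are attached to landscape classes (hydrological response units), estimated by fitting to a subset of gauged basins, and then evaluated at gauges withheld from calibration. All classes, data and values are invented.

    # Sketch of HRU-based regionalisation: fit one parameter per landscape class
    # to calibration gauges, then evaluate at a gauge withheld from calibration.
    import numpy as np
    from scipy.optimize import minimize

    HRU_CLASSES = ["forest", "arable", "upland"]

    def basin_runoff(rain, hru_fractions, hru_coeffs):
        """Runoff = rainfall times an area-weighted runoff coefficient per HRU class."""
        coeff = sum(f * c for f, c in zip(hru_fractions, hru_coeffs))
        return coeff * rain

    def calibrate(calibration_basins):
        """Fit one runoff coefficient per HRU class to the calibration gauges."""
        def loss(coeffs):
            return sum(np.sum((basin_runoff(b["rain"], b["hru_fractions"], coeffs)
                               - b["q_obs"]) ** 2) for b in calibration_basins)
        res = minimize(loss, x0=np.full(len(HRU_CLASSES), 0.5),
                       bounds=[(0.0, 1.0)] * len(HRU_CLASSES))
        return res.x

    def evaluate(basin, coeffs):
        """Nash-Sutcliffe efficiency at a gauge withheld from calibration."""
        q_sim = basin_runoff(basin["rain"], basin["hru_fractions"], coeffs)
        q_obs = basin["q_obs"]
        return 1 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

    # tiny synthetic demonstration with made-up basins
    rng = np.random.default_rng(3)
    def make_basin(fracs, coeffs_true):
        rain = rng.exponential(2.0, size=100)
        return {"rain": rain, "hru_fractions": fracs,
                "q_obs": basin_runoff(rain, fracs, coeffs_true) + rng.normal(0, 0.1, 100)}

    true_coeffs = [0.2, 0.5, 0.8]
    mixes = ([0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.1, 0.3, 0.6], [0.4, 0.4, 0.2])
    gauged = [make_basin(f, true_coeffs) for f in mixes]
    fitted = calibrate(gauged[:3])                          # fit to three gauges
    print("fitted class coefficients:", np.round(fitted, 2))
    print("NSE at withheld gauge:", round(evaluate(gauged[3], fitted), 2))

In the real applications the model is of course a full semi-distributed hydrological and water quality model rather than a single runoff coefficient, but the split between calibration gauges and independent evaluation gauges is the essential point.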

I have written elsewhere about “models of everywhere” of this type (see ) and gave a short presentation about these ideas at short notice when there was an empty slot in the session on limits to predictability. I am not so convinced that it will be possible to find parameter values that will give good results everywhere, but the advantage of this type of model is that the visualisations of the results mean that local stakeholders will be able to look at the predictions and make their own evaluations of how well the model is doing. This can then feed back directly into model improvements, in particular where local knowledge suggests that there are deficiencies in the predictions. Effectively, this form of evaluation can be treated as hypothesis testing of the local implementation of a model, with a requirement to respond when it is suggested that the model as hypothesis can be rejected. This would essentially constitute a learning process about local places in improving the models. Uncertainty estimation has to be an important part of such evaluations, of course, because we would not want to reject a model just because it has been driven by poor input data. In the Swedish case, it was apparent that the predictions were not so good in some of the catchments in the mountains towards the Norwegian border, where the input data on rainfalls and snowmelt were probably not so accurate. In the case of the Russian model, they feel it is fair to adjust some of the snow accumulations each year since these are poorly known. Neither group currently does this within any formal uncertainty analysis.

Some of the discussion after the presentation was concerned with what such a learning process would look like. Would it mean adjusting local parameters within some general model structure to get the best fit possible (Hoshin Gupta), or would it mean local modifications to model structures, with the danger that we might end up with ad hoc modifications to deal with local circumstances (Ross Woods)? I suspect we will need something of both. There would be some advantage in retaining a general model structure when moving from model testing on gauged sites to application at ungauged sites, but if such a model is to be truly general then it might be more complex than is needed in many catchments (even if components could be switched on or off). It might also be the case that different types of applications might need more or less complex models (see some of our flood forecasting work using very simple transfer function models, e.g. ). Olga Semenova argued that only one model structure should be necessary if it is based on adequate process representations (we disagree about whether the process representations in the Russian model are adequate, even if their results give reasonable simulations). Her argument is that a successful hydrological model should be tested over a wide range of conditions, rather than just being calibrated locally, and should perform well across all applications. If not, we should look for something better.

Models of everywhere of this type are likely to become more common in the future, driven by the needs of the Water Framework Directive, Floods Directive and integrated water resources management. Thus the discussion about what a good model should look like and how models should be tested is likely to go on… It is certainly relevant to applications involving catchment change. Something for our readers to add comments on…

