Understanding, communicating and managing uncertainty and risk related to future changes in catchments.

CCN News

Change in land management can benefit farmers, the environment and the wider population
added on 06 10 2009 by Clare Black

Farmers should be paid for environmental services to society, they were told at a meeting in Cardiff on 1st October – ‘Meeting Challenges in Land Management’.

Farming and land management in Wales can deliver huge benefits for society and the agriculture industry, according to the Head of Agriculture, Forestry and Soils at the Directorate General for the Environment at the European Commission.

More than 90 per cent of Wales is either forestry or farmland. How we manage this land in the future will be vital to combating climate change and improving our environment.

Different land management techniques could help to store more carbon in the soil, maintain healthier river levels for people and wildlife, and keep more rainwater out of rivers, reducing the risk of flooding to rural and urban communities.

It could also contribute to cleaning up the water in Welsh rivers to meet new, tougher standards and make a difference to bathing water quality at beaches throughout Wales.

These changes have recently been recognised in the new agri-environment scheme, Glastir, announced by the Welsh Assembly Government earlier this year.

Further reading.


Testing catchment models as hypotheses
added on 01 10 2009 by Clare Black

I am currently guest editing the second Annual Review issue for Hydrological Processes, planned for early in 2010, which will have a number of contributions focused on preferential flows in catchments and the estimation of mean travel times or mean residence time distributions in catchments. Both of these pose interesting issues in respect of all three focus areas in the Catchment Change Network – flood generation, water quality and water scarcity. Particularly in the water quality area, the way that they are linked will have an impact over both short and longer time scales. Despite this importance, our understanding of both preferential flows and travel time distributions is still limited, and this got me thinking about developing that understanding through predictive models treated as hypotheses about how a catchment system functions.

This has some implications for predicting the effects of change, since we clearly cannot easily test hypotheses (or sets of modelling assumptions) about what might happen in a particular catchment of interest in the future; we more usually rely on testing hypotheses under current conditions and, given a degree of belief that we are getting the right results for the right reasons, explore the consequences for scenarios of future change. Increasing that degree of belief is the purpose of testing, but there are two difficulties involved in this process. The first is that, as with classical statistical hypothesis testing, there is a possibility of making Type I errors (false positives, or incorrectly accepting a poor model) or Type II errors (false negatives, or incorrectly rejecting a good model), particularly when there are observational errors in the data being used in testing. The second is that this process of predicting future change relies on a form of uniformitarian principle; i.e. that a model that has survived current tests has the functionality required to predict the potentially different future conditions. In both cases, classical hypothesis testing will be limited by epistemic errors (see the previous entry of 27th September) in the observations and in our expectations about future processes.
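To make those two kinds of error concrete, here is a minimal numerical sketch (in Python; the toy runoff-coefficient models, the acceptance tolerance and the error magnitudes are all invented for illustration and do not come from any CCN study). Once the evaluation data carry observational error, a modestly biased model is quite often accepted and even an exactly right model is occasionally rejected:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical runoff-coefficient "models" -- purely illustrative.
def true_response(rain):
    return 0.6 * rain        # the (unknown) true catchment behaviour

def good_model(rain):
    return 0.6 * rain        # right structure and parameter value

def poor_model(rain):
    return 0.5 * rain        # modestly biased runoff coefficient

def accepted(model, rain, obs, tol=0.25):
    """Accept the model if its RMSE against the noisy observations is below a
    fixed tolerance -- a crude stand-in for a formal hypothesis test."""
    return np.sqrt(np.mean((model(rain) - obs) ** 2)) < tol

n_trials, n_obs, noise_sd = 5000, 30, 0.2
false_accept = false_reject = 0
for _ in range(n_trials):
    rain = rng.uniform(0.0, 2.0, n_obs)
    obs = true_response(rain) + rng.normal(0.0, noise_sd, n_obs)   # observational error
    false_accept += accepted(poor_model, rain, obs)       # accepting a poor model (Type I above)
    false_reject += not accepted(good_model, rain, obs)   # rejecting a good model (Type II above)

print(f"Poor model wrongly accepted in {false_accept / n_trials:.1%} of trials")
print(f"Good model wrongly rejected in {false_reject / n_trials:.1%} of trials")
```

Tightening the acceptance tolerance reduces the false acceptances but increases the false rejections, which is precisely the trade-off at issue.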

That does not mean, however, that we should not try to test models as hypotheses, only that new ways of doing so might be required. We could, for example, explore the possibility of using real-world analogues for different scenarios of future conditions, with (approximately) the right combinations of expected temperatures, land use and rainfalls, to show that, if there are significant differences in processes, the predictive model can represent them acceptably. The analogues would not, of course, be perfect (uniqueness of place suggests that calibrated parameter values would necessarily also reflect other factors), but this might increase the degree of belief in model predictions of future change rather more than relying on a model that has only been shown to reproduce historical conditions at the site of interest. As far as I know, no such study has been reported (although analogues have been used in other ways)… does any reader know of such a study?


From one meeting to another...
added on 29 09 2009 by Clare Black

This week it was a workshop in Bristol organised by the NERC scoping study on risk and uncertainty in natural hazards (SAPPUR) led by Jonty Rougier of the BRisk Centre at Bristol University. The study is due to report at the end of November, with a summary of the state of the art in different areas of natural hazards and suggestions for a programme of research and training to be funded by NERC. This will have relevance for all three focus areas in CCN, including the specification of training needs.

It will not be surprising that many of the issues overlap with those that arose in the sessions at Hyderabad (see last entry). The discussions touched on the definition of risk, the assessment of model adequacy, the quantification of hazard and risk, and techniques for the visualisation and communication of uncertainties. There were interesting presentations from David Spiegelhalter on methods used in the medical sciences and Roger Cooke on methods used in the elicitation of expert opinions.

John Rees, the NERC Theme Leader for Natural Hazards, raised the following questions that he felt were important for this scoping study to address:

  • If model uncertainty is needed to better inform policy decisions, how is it best quantified?
  • How should alternative conceptual models and evidence contradictions be used in policy and decision making?
  • Is the mean value the appropriate safety metric to inform decisions?
  • What is the best way to represent scientific consensus?
  • What are useful mechanisms for integrating risk and uncertainty science into policy development?
  • What should be addressed by the research councils (there is a provisional budget of £1.5m available to support the research programme)?

There was a general recognition amongst the participants, who covered a range of different natural hazards, that the proper evaluation of hazard and risk is often difficult: we frequently have only sparse or no data with which to try to quantify sources of uncertainty, and there may be many different alternative predictive models of varying degrees of approximation. These are the epistemic uncertainties, but there was not much discussion about how these might be reflected in the quantification of risk. Many participants seemed to accept that the only way to attempt such a quantification was by using statistical methods. I am not so sure.

It is true that any assessment of uncertainty will be conditional on the implicit or explicit assumptions made in the assessment (which might involve treating all sources of uncertainty as if they can be handled statistically). It is also true that those assumptions should be checked for validity in any study (though this is not always evident in publications). But if the fact that the uncertainties are epistemic means that the errors are likely to have strong structure and non-stationarity that will depend on a particular model implementation, then it is possible that alternative non-statistical methods of uncertainty estimation might be appropriate.
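As one illustration of what a non-statistical alternative might look like (my own choice of example, not something the post prescribes), the sketch below screens candidate parameter values against limits of acceptability placed around each observation and uses the surviving set, rather than a fitted statistical error model, to span the prediction uncertainty:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(rain, coeff):
    """Toy runoff model: a single runoff coefficient (illustrative only)."""
    return coeff * rain

# Synthetic "observations" with error bounds around each value.
rain = rng.uniform(0.0, 2.0, 40)
obs = 0.6 * rain + rng.normal(0.0, 0.05, 40)
lower, upper = obs - 0.2, obs + 0.2     # limits of acceptability per observation

# Screen candidate parameter values: retain those acceptable at every point.
candidates = np.linspace(0.2, 1.0, 200)
behavioural = [c for c in candidates
               if np.all((simulate(rain, c) >= lower) & (simulate(rain, c) <= upper))]

if not behavioural:
    print("No candidate survived -- the model structure itself would be rejected.")
else:
    # The retained set spans the prediction uncertainty for a new event.
    preds = [simulate(1.5, c) for c in behavioural]
    print(f"{len(behavioural)} of {candidates.size} candidates retained")
    print(f"Prediction range for rain=1.5: {min(preds):.2f} to {max(preds):.2f}")
```

The point of the sketch is only that acceptability can be judged directly against observation error bounds, without assuming any particular statistical structure for the errors.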

I have been trying to think about this in the context of testing models as hypotheses given limited uncertain data (something that frequently arises in the focus areas of CCN). Hypothesis testing means considering both Type I and Type II errors (false positives, or accepting a poor model, and false negatives, or rejecting a good model). An important area for CCN is how to avoid both types of error in model hypothesis testing, so that in prediction we are more likely to be getting the right results for the right reasons. So an interesting question is what constitutes an adequate hypothesis test, adequate in the sense of being fit for purpose. This question was addressed, at least indirectly, by Britt Hill of the US Nuclear Regulatory Commission in a talk about the performance assessment process for the safety case for the Yucca Mountain repository site.

In that study, Monte Carlo simulation was used to explore a wide range of potential outcomes (in terms of the future dose of radioactivity to a local population over a period of the next 1 million years or so). A cascade of model components, from infiltration to waste leaching, was involved in these calculations, each depending on multiple (uncertain) parameters. The Monte Carlo experiments spanned a range of alternative conceptual models and possible model parameters. Decisions about which models to run appeared to have been produced by scientific consensus, something that Roger Cooke had earlier suggested was not necessarily the best way of extracting information from experts.
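A stripped-down sketch of that style of calculation is given below; the component structures, distributions and numbers are invented for illustration and are not taken from the Yucca Mountain assessment. The essential pattern is to sample a conceptual model as well as its parameters, run the cascade of components, and summarise the spread of outcomes:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two invented conceptual models for a single component (labelled "infiltration"
# here) -- illustrative only, not the models used in the assessment.
def infiltration_linear(rain, k):
    return k * rain

def infiltration_threshold(rain, k, t):
    return max(rain - t, 0.0) * k

def leaching(infiltration, release_rate):
    # Next component in the cascade: release driven by infiltration.
    return release_rate * infiltration

outcomes = []
for _ in range(10_000):
    rain = rng.lognormal(mean=0.0, sigma=0.5)        # uncertain forcing
    release_rate = rng.uniform(0.01, 0.1)            # uncertain parameter
    # Sample which conceptual model to use as well as its parameter values.
    if rng.random() < 0.5:
        infil = infiltration_linear(rain, k=rng.uniform(0.2, 0.6))
    else:
        infil = infiltration_threshold(rain, k=rng.uniform(0.4, 0.9),
                                       t=rng.uniform(0.2, 0.8))
    outcomes.append(leaching(infil, release_rate))

outcomes = np.array(outcomes)
print(f"Mean outcome:    {outcomes.mean():.4f}")
print(f"95th percentile: {np.percentile(outcomes, 95):.4f}")
```

Reporting a high percentile alongside the mean also bears on the question raised above about whether the mean value is the appropriate safety metric.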

There is no explicit hypothesis testing in this type of approach, only some qualitative assessment of whether performance is “reasonably supported” in terms of predictions in past studies, history matching, scientific credibility and so on. But it is sometimes the case that, for whatever reasons, even the best models do not provide acceptable predictions for all times and all places. This could be because of errors in the forcing data, because of model structural error, or because of error in the data with which a model is evaluated. It remains impossible to really separate out these different sources of error, and this means that it is difficult to do rigorous hypothesis testing for this type of environmental model.
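A toy demonstration of why that separation is so difficult (invented numbers, purely for illustration): a biased model driven by the correct rainfall and a correct model driven by under-measured rainfall can produce essentially identical residuals, so the residuals alone cannot say which source of error is responsible.

```python
import numpy as np

rng = np.random.default_rng(3)
rain_true = rng.uniform(0.0, 2.0, 20)
obs = 0.6 * rain_true + rng.normal(0.0, 0.05, 20)   # noisy runoff observations

# Case A: model structural/parameter error -- correct forcing, biased coefficient.
pred_model_error = 0.5 * rain_true

# Case B: forcing error -- correct coefficient, rainfall under-measured by a factor 5/6.
rain_measured = rain_true * (5.0 / 6.0)
pred_forcing_error = 0.6 * rain_measured

residual_gap = np.max(np.abs((obs - pred_model_error) - (obs - pred_forcing_error)))
print("Predictions effectively identical:", np.allclose(pred_model_error, pred_forcing_error))
print("Largest difference between the two sets of residuals:", residual_gap)
```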

This seems to be an area where further research is needed. It is surely important in developing guidance for model applications within each of the three CCN focus areas…


EEA joins forces with European Water Partnership
added on 28 09 2009 by Clare Black

The European Environment Agency and the European Water Partnership (EWP) announced today a new cooperation plan to improve water use in Europe. The first initiatives of the cooperation will be to develop a vision for sustainable water, raise awareness and strengthen information flows. “To be truly effective and relevant, environmental policy must be developed together with the actors who will work with it. For the water area, this means involving those who actually use, distribute and treat water such as agriculture, water utilities, industries, the energy or transport sector. This cooperation with EWP and its partners is a crucial step for us in that direction” said Professor Jacqueline McGlade, Executive Director of the EEA.


Met Office warns of catastrophic global warming in our lifetimes
added on 28 09 2009 by Clare Black

Unchecked global warming could bring a severe temperature rise of 4°C within many people’s lifetimes, according to a new report for the British government that significantly raises the stakes over climate change.

The study, prepared for the Department of Energy and Climate Change by scientists at the Met Office, challenges the assumption that severe warming will be a threat only for future generations, and warns that a catastrophic 4°C rise in temperature could happen by 2060 without strong action on emissions.

“We’ve always talked about these very severe impacts only affecting future generations, but people alive today could live to see a 4°C rise,” said Richard Betts, the head of climate impacts at the Met Office Hadley Centre, who will announce the findings today at a conference at Oxford University. “People will say it’s an extreme scenario, and it is an extreme scenario, but it’s also a plausible scenario.”

Further reading.
