Deliberating risk and uncertainty in Cambridge

It seems that trying to provide estimates of the uncertainty associated with predictions of catchment change may not be enough as an input to the decision-making process.  At a meeting this week on Challenging Models in the Face of Uncertainty, organized by the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH) in Cambridge, it was suggested in a succession of talks that all such outcomes are culturally conditioned – both in the assumptions that are made and in the interpretation of the results.  These will depend, following the Cultural Theory grid/group characterisations of Mary Douglas, on whether you are a combination of hierarchist or egalitarian, individualist or collectivist.  I have a beard, so I must be an egalitarian collectivist.  Different groups will frame decisions in quite different ways, and since it is impossible to be totally rationalist about the application of science to real-world problems (in part because of the incomplete knowledge and epistemic uncertainties discussed in earlier blogs), there is plenty of scope for such framings.

The issue of climate change underlay a lot of the discussions, but many other policy areas were used as examples, from badger culls to GM crops and security against terrorism.  A particular issue in respect of climate change, resonating with the blog after Hydropredict2010, was the choice of discount rate in cost-benefit analysis.  We do not know what future discount rates should be, but different assumptions might make a lot of difference to the ranking of options.  It is also easy to see how the choice of parameters in any model of catchment change might be influenced by desired outcomes.  Steve Rayner (Oxford) gave the example of the Chesapeake Bay study.  He suggested that the (highly complex) model of the system and its catchment areas shows a steady increase in water quality in the bay as a result of the improvement measures that have been taken.  He also suggested, however, that the observations gave no such evidence of any increase in quality.  The modeled improvements were nevertheless politically expedient in getting continued Congressional funding for the project from year to year.
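To see how much the choice of discount rate can matter, here is a minimal sketch (in Python, with entirely hypothetical numbers rather than anything discussed at the meeting) in which the ranking of two options by net present value flips as the assumed rate changes:

```python
# Hypothetical illustration: the ranking of two options by net present
# value (NPV) can flip depending on the discount rate that is assumed.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows at a given discount rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Option A: cheap now, with modest benefits spread over 30 years
option_a = [-100.0] + [10.0] * 30
# Option B: expensive now, with larger benefits arriving late in the period
option_b = [-250.0] + [0.0] * 10 + [35.0] * 20

for rate in (0.01, 0.035, 0.08):
    a, b = npv(option_a, rate), npv(option_b, rate)
    print(f"rate={rate:.3f}  NPV(A)={a:7.1f}  NPV(B)={b:7.1f}  -> prefer {'A' if a > b else 'B'}")
```

At low rates the delayed benefits of option B dominate; at higher rates option A comes out ahead, so the ranking depends entirely on an assumption about which reasonable people can disagree.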

So there was much discussion about how to avoid the conflicts between world views that this post-normal framing of scientific rationality implies.   Suggestions in different presentations ranged from opening up the dialogue to a plurality of views in a deliberative discourse to invoking a context of common interest in defending national security or interest.

A model and its predictions might be just one element in a deliberative discourse between different groups who might choose to use it (or equivalent models) in different ways.  While no model could possibly capture the full complexity of a catchment system (which might itself provide an element of surprise in its responses to forcings that have not been observed before), to my mind that really does not mean that the modeler should not be as rational as possible in providing advice to decision makers.  We should always, of course, reflect on the fact that this rationality is conditional; it depends on the perceptual models of those who use it, models shaped by individual histories, and on the particular observations available for evaluation.  There is no common agreement about how catchments should be modeled, only a wide range of software packages that might be used for different purposes.

So being rational then comes down to using models that, as far as possible, get the right results for the right reasons.  This is a valid endeavor in itself, even in a post-normal science world with culturally conditioned assumptions.  There remain many issues about how to test whether we can get the right results for the right reasons (see, for example, Beven, 2010) and in communicating the assumptions and limitations of the model predictions to users, but this is the nature of trying to do science properly.  An uncertainty analysis can then provide a framework for both testing models as hypotheses and being explicit about assumptions.  I have the impression that some sociologists of science would see this as only another means of the scientist trying to establish power and authority in shaping policy (with some implication that this authority might not be justified).  To me, it is simply an exercise in being as scientifically honest as possible.  How the resulting predictions might then get used in a (more or less) inclusive deliberative decision process is a quite different issue, but what we should not do is conceal the limitations and uncertainties of model predictions.  That would not be a good long-term strategy.  As Andy Stirling (Sussex) put it: there is widespread empathy for humility in the role that science can legitimately play in decision making.
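As a purely illustrative sketch of what "testing models as hypotheses" might look like in practice (an illustration of the general idea only, not the specific tests developed in Beven, 2010), the fragment below applies a simple limits-of-acceptability style check: a model run is retained as a feasible hypothesis only if its simulated values fall within assumed bounds of observational error.  The data and error bounds here are hypothetical, and a real application would need much more care in defining those limits.

```python
import numpy as np

def within_limits(simulated, observed, obs_error):
    """Retain a model run as a feasible hypothesis only if every simulated
    value lies within the observational uncertainty around the data."""
    return np.all(np.abs(simulated - observed) <= obs_error)

# Hypothetical observed discharges with an assumed +/-20% observation error
observed = np.array([12.0, 30.0, 55.0, 41.0, 18.0])
obs_error = 0.2 * observed

run_a = np.array([11.0, 33.0, 50.0, 45.0, 20.0])   # stays inside the limits
run_b = np.array([11.0, 33.0, 80.0, 45.0, 20.0])   # overshoots the peak flow

print(within_limits(run_a, observed, obs_error))   # True  -> retain
print(within_limits(run_b, observed, obs_error))   # False -> reject this run
```

The point is not the code itself but that the acceptance criteria, and hence the assumptions, are written down explicitly and can be challenged by others in the deliberative process.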

Reference:

Beven, K. J., 2010, Preferential flows and travel time distributions: defining adequate hypothesis tests for hydrological process models, Hydrological Processes, 24: 1537-1547.
