Both SAFE and CURE were developed under the NERC CREDIBLE project, and have a common form of commented MATLAB workflows to help the user to
understand how each workflow is carried out. Some elements are common to both sensitivity analysis (SAFE) and uncertainty estimation (CURE), such as the generation of random number sequences from different underlying distributions. SAFE adds tools for analysing the sensitivity of model outputs to different factors. This is very similar to the assessment of output variability in a Forward Uncertainty Analysis in CURE, which simply propagates the prior variability of those factors to the model output. CURE can also add a further step, however, of conditioning the uncertainty on observations of the behaviour of the real system by specifying a likelihood measure within either the GLUE or Bayes statistical frameworks.
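As a rough illustration of the distinction (a minimal sketch in MATLAB, not taken from the SAFE or CURE code), the lines below propagate samples from assumed prior distributions through a toy model and then optionally weight the runs with an informal likelihood; the model, parameter ranges and observation are all hypothetical.

    % Minimal sketch of a forward uncertainty analysis with an optional
    % conditioning step (illustrative only; not code from SAFE or CURE).
    % The toy model, parameter ranges and observation are hypothetical.
    nSample = 1000;                            % number of Monte Carlo samples
    pLow  = [0.1  5];                          % assumed lower bounds of two factors
    pHigh = [0.9 50];                          % assumed upper bounds of two factors

    % Sample the factors from uniform prior distributions
    theta = pLow + rand(nSample, 2) .* (pHigh - pLow);

    % Propagate the prior variability through a (toy) model
    model = @(p) p(1) .* exp(-p(2) / 25);      % stand-in for the real simulator
    yPred = arrayfun(@(i) model(theta(i, :)), (1:nSample)');

    % Forward uncertainty analysis: unconditioned output bounds
    ySort = sort(yPred);
    fwdBounds = ySort(ceil([0.05 0.5 0.95] * nSample));   % approx. 5/50/95% quantiles

    % Optional conditioning (GLUE- or Bayes-like): weight each run by a
    % likelihood measure describing how well it matches an observation yObs
    yObs  = 0.1;                               % hypothetical observation
    sigma = 0.02;                              % assumed tolerance of the fit
    L = exp(-0.5 * ((yPred - yObs) / sigma).^2);
    w = L / sum(L);                            % normalised weights for conditioned bounds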
◈ WHY ARE THERE SO MANY UNCERTAINTY ESTIMATION METHODS?
This is a problem that is discussed in Chapter 2 of Environmental Modelling – An Uncertain Future? It arises because any uncertainty estimation
methodology requires many assumptions, and many of these are difficult to define given current knowledge. This applies to input uncertainty, model parameter uncertainty, observational uncertainty and the definition of an information or likelihood measure. All of these are subject to what are
called epistemic uncertainties arising from lack of knowledge, as well as the aleatory uncertainties that arise from random variability. This lack
of knowledge leaves plenty of scope for the choice of different approaches, some of which are included as CURE workflows.
◈ HOW DO I CHOOSE AN UNCERTAINTY ESTIMATION METHOD?
We have provided a flow chart to help users decide which uncertainty estimation method to use.
◈ WHY CHOOSE TO USE GLUE WITH ITS SUBJECTIVE CHOICE OF LIKELIHOOD MEASURE WHEN METHODS BASED ON STATISTICAL THEORY ARE AVAILABLE?
If the variables in a model application can be well defined in terms of aleatory variability, and the model residuals show a simple statistical
structure, then the full power of statistical theory can be used in uncertainty estimation. This is often not the case, however, and methods such
as GLUE that allow a more subjective approach might be more appropriate. In particular, GLUE can be used with specified Limits of Acceptability, either for single observational points or for complete time or space series. Outside these limits the likelihood is set to zero and those model runs are rejected. This does not happen with statistical likelihoods, which only become very small regardless of how poor the model fit is; any model rejection then requires some additional condition to be imposed. Some discussion of these issues can be found in Beven (2016) and in Chapter 2 of Environmental Modelling – An Uncertain Future?
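As an illustration of how Limits of Acceptability work in practice, the MATLAB sketch below screens a set of Monte Carlo runs against assumed limits around each observation point; the simulations, observations, limits and within-limit scoring are hypothetical choices rather than the CURE implementation.

    % Sketch of GLUE-style Limits of Acceptability screening (illustrative only;
    % the simulations, observations and limits below are placeholders).
    nRuns = 1000;  nObs = 20;
    obs    = 0.5 * ones(1, nObs);             % placeholder observations
    tol    = 0.2 * ones(1, nObs);             % assumed acceptable error at each point
    simMat = obs + 0.1 * randn(nRuns, nObs);  % one row of simulated values per run

    loLim = obs - tol;                        % lower limit of acceptability
    hiLim = obs + tol;                        % upper limit of acceptability

    % A run is retained only if it lies within the limits at EVERY observation
    % point; otherwise its likelihood is set to zero and the run is rejected.
    inside      = simMat >= loLim & simMat <= hiLim;   % nRuns x nObs logical
    behavioural = all(inside, 2);                      % nRuns x 1 logical

    likelihood = zeros(nRuns, 1);
    % One possible within-limits score: 1 at the observation, falling to 0 at a limit
    score = 1 - abs(simMat - obs) ./ tol;
    likelihood(behavioural) = prod(score(behavioural, :), 2);
    likelihood = likelihood / sum(likelihood);         % normalise over retained runs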
◈ WHY DO THE DIFFERENT UNCERTAINTY ESTIMATION METHODS GIVE SIGNIFICANTLY DIFFERENT UNCERTAINTY RANGES?
There is, of course, no right answer, precisely because there are multiple sources of epistemic uncertainty, including model structural uncertainty,
that are impossible to separate. There are also different frameworks for assessing uncertainties and different ways of formulating likelihoods. If we had
knowledge of the true nature of the sources of uncertainty then they would not be epistemic and we might then be more confident about using formal statistical
theory to deal with all the sources of unpredictability. Some epistemic uncertainties should be reducible by further experimentation or observation, so that
there is an expectation that we might move towards more aleatory residual error in the future. In hydrology, however, this still seems a long way off, particularly
with respect to the hydrological properties of the subsurface. And if, of course, there is no right answer, then this leaves plenty of scope for different
philosophical and technical approaches for uncertainty estimation. In this situation there is a lot of uncertainty about uncertainty estimation, and this is
likely to be the case for the foreseeable future. This has the consequence that communication of the meaning of different estimates of uncertainty can be difficult.
This should not, however, be an excuse for failing to be clear about the assumptions made in producing a particular uncertainty estimate. This is why
completing the condition tree of assumptions to produce an audit trail for users becomes so important.
◈ WHY SHOULD I USE THE AUDIT TRAIL GRAPHICAL USER INTERFACE FOR RECORDING ASSUMPTIONS IN CURE?
Because of the difficulty of defining some of the inputs to an application of uncertainty estimation (e.g. input uncertainties, prior model parameter distributions,
observational uncertainties and likelihood measures), especially where important epistemic uncertainties are expected, there will be a wide range
of possible assumptions that could be made. By prompting the user to provide information on the assumptions made at each stage in a workflow it is possible to record
an audit trail unique to that particular analysis (see Workflow 10 of CURE for an example). The audit trail can then be examined by users of the analysis to help communicate
what has been done, and to query any assumptions that do not match the user's expectations. It is then relatively easy to change the assumptions and rerun the
necessary simulations to evaluate the impact. We believe that this is an important mechanism for encouraging more thoughtful use and justification of assumptions in these
types of analyses that are limited by lack of knowledge.