
Do you know RIO?

[Image: infrared image of a group of people in a meeting]

During the pandemic many political leaders have been heard to justify their decisions by telling us that they were following advice from scientists.  I think it was Thomas Kuhn who proposed that the views of a group of scientists will be normally distributed if the group is large enough, i.e., a bell-shaped curve with a few scientists providing outlying opinions at either end and the majority in the middle of the distribution [see ‘Uncertainty about Bayesian methods’ on June 7th, 2017].  So, the advice you receive depends on which scientist you consult.  Of course, you can consult a group of experts to identify the full range of advice and seek a consensus; however, this is notoriously difficult because some voices will be louder than others, and some experts will be very certain about their predictions of the future while others will be very cautious about predicting anything.  This is often because the former group are suffering from meta-ignorance, i.e., failing even to consider the possibility of being wrong, while the latter are so aware of the ontological, or deep, uncertainties that they surround their statements with caveats that render them difficult or impossible to interpret or to employ in decision-making [see ‘Deep uncertainty and meta-ignorance’ on July 21st 2021].  Politicians prefer a simple message that they can explain to the media and so tend to listen to the clear but usually inaccurate message from the confident forecasters [see ‘Forecasts and chimpanzees throwing darts’ on September 2nd, 2020].  However, with time and effort, it is possible to make rational decisions based on expert opinion even when the opinions appear to diverge.  There are several recognised protocols for expert elicitation, which are used in a wide range of engineering and scientific activities to support decision-making in the absence of comprehensive information.  I frequently use a form of the Sheffield protocol, developed originally to elicit a probability distribution for an uncertain quantity from a group of experts.  Initially, the experts are asked individually to provide private, written, independent advice on the issue of concern.  Subsequently, their advice is shared with the group and a facilitator leads a discussion aimed at reaching a consensus.  This can be difficult if the initial advice is divergent and individuals hold strong views.  This is when RIO can help.  RIO stands for Rational Impartial Observer, and an expert group will often reach a consensus rapidly when asked to consider what RIO might reasonably believe after reading their independent advice and listening to their discussion.
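For readers who want to see the mechanics, here is a minimal sketch of the quantitative step that typically follows the private judgements in a Sheffield-style (SHELF) elicitation: fitting a distribution to each expert’s elicited quantiles and forming an equal-weight pool as a neutral summary of where the group starts.  The experts, their quantile values and the choice of a normal distribution are all hypothetical, and the protocol itself does not prescribe this code; in practice the pooled curve is only a talking point for the facilitated discussion, not the consensus itself.

```python
# Illustrative sketch only: fit a normal distribution to each (hypothetical)
# expert's elicited quartiles, then form an equal-weight linear opinion pool
# as a neutral starting point for the facilitated consensus discussion.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Hypothetical elicited judgements: each expert's 25th, 50th and 75th
# percentiles for the same uncertain quantity (e.g. a failure rate, in %).
elicited = {
    "expert_A": {0.25: 2.0, 0.50: 3.0, 0.75: 4.5},
    "expert_B": {0.25: 1.0, 0.50: 1.8, 0.75: 2.5},
    "expert_C": {0.25: 3.5, 0.50: 5.0, 0.75: 7.0},
}

def fit_normal(quantiles):
    """Least-squares fit of a normal distribution to one expert's quantiles."""
    probs = sorted(quantiles)
    values = np.array([quantiles[p] for p in probs])

    def loss(params):
        mu, sigma = params
        if sigma <= 0:  # keep the optimiser away from invalid scales
            return np.inf
        return np.sum((stats.norm.ppf(probs, mu, sigma) - values) ** 2)

    # Crude starting guess: the median and the inter-quartile range.
    result = minimize(loss, x0=[values[1], values[2] - values[0]],
                      method="Nelder-Mead")
    return result.x  # fitted (mu, sigma)

fits = {name: fit_normal(q) for name, q in elicited.items()}
for name, (mu, sigma) in fits.items():
    print(f"{name}: mu = {mu:.2f}, sigma = {sigma:.2f}")

# Equal-weight linear opinion pool: one candidate for what RIO might take
# as a starting point before hearing the group's discussion.
x = np.linspace(0.0, 10.0, 500)
pool_pdf = np.mean([stats.norm.pdf(x, mu, sigma)
                    for mu, sigma in fits.values()], axis=0)
```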

Source:

O’Hagan A. Expert knowledge elicitation: subjective but scientific. The American Statistician, 73(sup1):69-81, 2019.

Deep uncertainty and meta-ignorance

The term ‘unknown unknowns’ was made famous by Donald Rumsfeld almost 20 years ago when, as US Secretary of Defense, he used it to describe the lack of evidence about terrorist groups being supplied with weapons of mass destruction by the Iraqi government.  However, the term was probably coined almost 50 years earlier by Joseph Luft and Harrington Ingham when they developed the Johari window as a heuristic tool to help people better understand their relationships.  In engineering, and in other fields in which predictive models are important tools, it is used to describe situations about which there is deep uncertainty.  Deep uncertainty refers to situations where experts do not know, or cannot agree about, what models to use, how to describe the uncertainties present, or how to interpret the outcomes from predictive models.  Rumsfeld talked about known knowns, known unknowns, and unknown unknowns; an alternative, simpler but perhaps less catchy, classification is ‘the known, the unknown, and the unknowable’, which Diebold, Doherty and Herring used as part of the title of their book on financial risk management.  David Spiegelhalter suggests ‘risk, uncertainty and ignorance’ before providing a more sophisticated classification: aleatory uncertainty, epistemic uncertainty and ontological uncertainty.  Aleatory uncertainty is the inevitable unpredictability of the future that can be fully described using probability.  Epistemic uncertainty is a lack of knowledge about the structure and parameters of the models used to predict the future.  Ontological uncertainty is a complete lack of knowledge and understanding of the entire modelling process, i.e., deep uncertainty.  When the presence of ontological uncertainty is not recognised, we have meta-ignorance, which means failing even to consider the possibility of being wrong.  For a number of years, part of my research effort has been focussed on predictive models that are unprincipled and untestable; in other words, they are not built on widely-accepted principles or scientific laws, and it is not feasible to conduct physical tests to acquire data to demonstrate their validity [see editorial ‘On the credibility of engineering models and meta-models‘, JSA 50(4):2015].  Some would say that untestability implies a model is not scientific, based on Popper’s dictum that a scientific theory must be refutable.  However, in reality, unprincipled and untestable models are encountered in a range of fields, including space engineering, fusion energy and toxicology.  We have developed a set of credibility factors, designed as a heuristic tool to allow the relevance of such models and their predictions to be evaluated systematically [see ‘Credible predictions for regulatory decision-making‘ on December 9th, 2020].  One outcome is to allow experts to agree on their disagreements and ignorance, i.e., to define the extent of our ontological uncertainty, which is an important step towards making rational decisions about the future when there is deep uncertainty.
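Since the aleatory/epistemic/ontological distinction can feel abstract, here is a toy Monte Carlo sketch (mine, not Spiegelhalter’s) showing how the first two categories appear in a simple predictive model, and why the third cannot appear in code at all.  The linear model, the prior on the parameter k and all of the numbers are invented purely for illustration.

```python
# Toy illustration of aleatory versus epistemic uncertainty in a Monte Carlo
# prediction with a made-up linear model, y = 10*k + noise. Ontological
# uncertainty is, by definition, absent from the code: it is the possibility
# that this model structure is wrong in ways we have not even imagined.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000

# Epistemic: we do not know the true value of the parameter k, so we express
# our ignorance as a (hypothetical) prior; more data would narrow this.
k = rng.normal(loc=0.8, scale=0.2, size=n)

# Aleatory: even if k were known exactly, outcomes would vary irreducibly.
noise = rng.normal(loc=0.0, scale=0.5, size=n)

prediction = 10.0 * k + noise

print(f"mean prediction:               {prediction.mean():.2f}")
print(f"total spread (std):            {prediction.std():.2f}")
print(f"aleatory spread alone (noise): {noise.std():.2f}")
print(f"epistemic spread alone (10*k): {(10.0 * k).std():.2f}")
```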

References

Diebold FX, Doherty NA, Herring RJ, eds. The Known, the Unknown, and the Unknowable in Financial Risk Management: Measurement and Theory Advancing Practice. Princeton, NJ: Princeton University Press, 2010.

Spiegelhalter D. Risk and uncertainty communication. Annual Review of Statistics and Its Application, 4:31-60, 2017.

Patterson EA, Whelan MP. On the validation of variable fidelity multi-physics simulations. Journal of Sound and Vibration, 448:247-258, 2019.

Patterson EA, Whelan MP, Worth AP. The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application. Computational Toxicology, 100144, 2020.