Tag Archives: knowledge

The rest of the planet has been waiting patiently for us to figure it out

Research in British Columbia has found evidence of nitrogen from fish in tree rings.  The salmon that swim in the local rivers provide food for predators, such as bears and eagles, which leave the remains of the salmon on the forest floor, where they decompose and allow the trees to absorb the nitrogen embedded in the salmon’s bones.  In some cases, up to three-quarters of a tree’s nitrogen comes from salmon.  This implies that interfering in the life cycle of the salmon, for instance by commercial fishing, will affect its predators, the forest and everything that depends on or interacts with the trees.  The complex nature of these interconnections has been apparent to the aboriginal peoples of the world for a very long time [see ‘Blinded by reductionism’ on August 24th, 2022].  To quote Suzanne Simard, ‘Mistreatment of one species is mistreatment of all.  The rest of the planet has been waiting patiently for us to figure that out’.

Source: Suzanne Simard, Finding the Mother Tree, Penguin, 2021.

Image: photograph of an original painting bought by the author in Beijing

Intelligent openness

Photo credit: Tom

As an engineer and an academic, my opinion as an expert is often sought informally but less frequently formally, perhaps because I am reluctant to offer the certainty and precision that is so often expected of experts; instead, I tend to highlight the options and uncertainties [see ‘Forecasts and chimpanzees throwing darts’ on September 2nd, 2020].  These options and uncertainties will likely change as more information and knowledge become available.  An expert who changes their mind and cannot offer certainty and precision tends not to be welcomed by society, and in particular by the media, who want simple statements and explanations.  One problem with offering certainty and precision as an expert is that it might appear you are part of a technocratic subset seeking to impose their values on the rest of society, as Mary O’Brien has argued.  The philosopher Douglas Walton has suggested that it is improper for experts to proffer their opinion when there is a naked assertion that the expert’s identity warrants acceptance of their opinion or argument.  Both O’Brien and Walton have argued that expert authority is legitimate only when it can be challenged, which is akin to Popper’s approach to the falsification of scientific theories – if it is not refutable then it is not science.  Onora O’Neill has argued that the trustworthiness of such challengeable authority requires intelligent openness.  Intelligent openness means that the information being used by the expert is accessible and useable, and that the expert’s decision or argument is understandable (clearly explained in plain language) and assessable by someone with the time, expertise and access to the detail needed to attempt to refute the expert’s statements.  In other words, experts need to be transparent and science needs to be an open enterprise.

Sources:

Burgman MA, Trusting judgements: how to get the best out of experts, Cambridge: Cambridge University Press, 2016.

Harford T, How to make the world add up: 10 rules for thinking differently about numbers, London: Bridge Street Press, 2020.

O’Brien M, Making better environmental decisions: an alternative to risk assessment, Cambridge MA: MIT Press, 2000.

Walton D, Appeal to expert opinion: arguments from authority, University Park PA: Pennsylvania State University Press, 1997.

Royal Society, Science as an open enterprise, 2012: https://royalsociety.org/topics-policy/projects/science-public-enterprise/report/

Do you know RIO?

Image: infrared image of a group of people in a meeting

During the pandemic many political leaders have been heard to justify their decisions by telling us that they were following advice from scientists.  I think it was Thomas Kuhn who proposed that the views of a group of scientists will be normally distributed if the group is large enough, i.e., a bell-shaped curve with a few scientists providing outlying opinions at either end and the majority in the middle of the distribution [see ‘Uncertainty about Bayesian methods’ on June 7th, 2017].  So the advice you receive depends on which scientist you consult.  Of course, you can consult a group of experts in order to identify the full range of advice and seek a consensus; however, this is notoriously difficult because some voices will be louder than others and some experts will be very certain about their predictions of the future while others will be very cautious about predicting anything.  This is often because the former group are suffering from meta-ignorance, i.e., failing to even consider the possibility of being wrong, while the latter are so aware of the ontological or deep uncertainties that they prefer to surround their statements with caveats that render them difficult or impossible to interpret or employ in decision-making [see ‘Deep uncertainty and meta-ignorance’ on July 21st, 2021].  Politicians prefer a simple message that they can explain to the media and tend to listen to the clear but usually inaccurate message from the confident forecasters [see ‘Forecasts and chimpanzees throwing darts’ on September 2nd, 2020].  However, with time and effort, it is possible to make rational decisions based on expert opinion even when the opinions appear to diverge.  There are several recognised protocols for expert elicitation which are used in a wide range of engineering and scientific activities to support decision-making in the absence of comprehensive information.  I frequently use a form of the Sheffield protocol, developed originally to elicit a probability distribution for an uncertain quantity from a group of experts.  Initially, the experts are asked individually to provide private, written, independent advice on the issue of concern.  Subsequently, their advice is shared with the group and a discussion to reach a consensus is led by a facilitator.  This can be difficult if the initial advice is divergent and individuals hold strong views.  This is when RIO can help.  RIO stands for Rational Impartial Observer, and an expert group often reaches a consensus rapidly when asked to consider what RIO might reasonably believe after reading their independent advice and listening to their discussion.
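For readers who like to see how the first, individual stage of such an elicitation can be turned into numbers, here is a minimal sketch in Python: it fits a normal distribution to each expert’s privately supplied median and 90% interval and then forms an equal-weight linear opinion pool.  It is not the software associated with the Sheffield protocol, and the expert judgements and threshold are entirely hypothetical.

```python
# Minimal sketch (hypothetical judgements): fit a normal distribution to each
# expert's elicited median and 90% interval, then form an equal-weight linear
# opinion pool as a starting point for the facilitated consensus discussion.
import numpy as np
from scipy import stats

# Each expert privately supplies a median (p50) and a 90% credible interval
# (p05, p95) for the uncertain quantity of interest.
expert_judgements = {
    "Expert A": {"p05": 10.0, "p50": 15.0, "p95": 20.0},
    "Expert B": {"p05": 12.0, "p50": 20.0, "p95": 28.0},
    "Expert C": {"p05": 8.0, "p50": 14.0, "p95": 22.0},
}

def fit_normal(judgement):
    """Fit a normal distribution by matching the median and the 90% interval."""
    mu = judgement["p50"]
    sigma = (judgement["p95"] - judgement["p05"]) / (2 * stats.norm.ppf(0.95))
    return stats.norm(loc=mu, scale=sigma)

fitted = {name: fit_normal(j) for name, j in expert_judgements.items()}
for name, dist in fitted.items():
    print(f"{name}: mean = {dist.mean():.1f}, standard deviation = {dist.std():.2f}")

# Equal-weight linear opinion pool: average the experts' probabilities, here for
# the event that the quantity exceeds a hypothetical threshold.
threshold = 25.0
pooled_exceedance = np.mean([1.0 - dist.cdf(threshold) for dist in fitted.values()])
print(f"Pooled probability of exceeding {threshold}: {pooled_exceedance:.3f}")
```

In practice, the pooled distribution is only a starting point; it is the facilitated discussion, with RIO in mind, that determines the consensus distribution finally adopted.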

Source:

O’Hagan A, Expert knowledge elicitation: subjective but scientific, The American Statistician, 73(sup1):69-81, 2019.

Deep uncertainty and meta-ignorance

Decorative image

The term ‘unknown unknowns’ was made famous by Donald Rumsfeld almost 20 years ago when, as US Secretary of Defense, he used it in describing the lack of evidence about terrorist groups being supplied with weapons of mass destruction by the Iraqi government.  However, the term was probably coined almost 50 years earlier by Joseph Luft and Harrington Ingham when they developed the Johari window as a heuristic tool to help people better understand their relationships.  In engineering, and other fields in which predictive models are important tools, it is used to describe situations about which there is deep uncertainty.  Deep uncertainty refers to situations where experts do not know or cannot agree about what models to use, how to describe the uncertainties present, or how to interpret the outcomes from predictive models.  Rumsfeld talked about known knowns, known unknowns, and unknown unknowns; an alternative, simpler but perhaps less catchy classification is ‘the known, the unknown, and the unknowable’, which Diebold, Doherty and Herring used as part of the title of their book on financial risk management.  David Spiegelhalter suggests ‘risk, uncertainty and ignorance’ before providing a more sophisticated classification: aleatory uncertainty, epistemic uncertainty and ontological uncertainty.  Aleatory uncertainty is the inevitable unpredictability of the future that can be fully described using probability.  Epistemic uncertainty is a lack of knowledge about the structure and parameters of models used to predict the future.  Ontological uncertainty is a complete lack of knowledge and understanding about the entire modelling process, i.e., deep uncertainty.  When it is not recognised that ontological uncertainty is present, we have meta-ignorance, which means failing even to consider the possibility of being wrong.  For a number of years, part of my research effort has been focussed on predictive models that are unprincipled and untestable; in other words, they are not built on widely-accepted principles or scientific laws and it is not feasible to conduct physical tests to acquire data to demonstrate their validity [see editorial ‘On the credibility of engineering models and meta-models’, JSA 50(4):2015].  Some people would say that untestability implies a model is not scientific, based on Popper’s statement that the scientific method requires a theory to be refutable.  However, in reality unprincipled and untestable models are encountered in a range of fields, including space engineering, fusion energy and toxicology.  We have developed a set of credibility factors that are designed as a heuristic tool to allow the relevance of such models and their predictions to be evaluated systematically [see ‘Credible predictions for regulatory decision-making’ on December 9th, 2020].  One outcome is to allow experts to agree on their disagreements and ignorance, i.e., to define the extent of our ontological uncertainty, which is an important step towards making rational decisions about the future when there is deep uncertainty.
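To illustrate the difference between the first two categories, the short sketch below, written in Python with entirely hypothetical numbers, treats the scatter of a load about its mean as aleatory uncertainty and our limited knowledge of that mean as epistemic uncertainty; ontological uncertainty, by definition, cannot be captured by such a calculation.

```python
# Illustrative sketch with hypothetical numbers: aleatory uncertainty is the
# irreducible scatter of a load about its mean; epistemic uncertainty is our
# limited knowledge of that mean. Ontological (deep) uncertainty - not knowing
# whether this model applies at all - cannot be represented by sampling and is
# deliberately absent here.
import numpy as np

rng = np.random.default_rng(42)
threshold = 12.0  # hypothetical limit that the load should not exceed

# Epistemic uncertainty: the mean load is only known to lie somewhere in [9, 11].
candidate_means = rng.uniform(9.0, 11.0, size=200)

exceedance_probabilities = []
for mu in candidate_means:
    # Aleatory uncertainty: scatter of the load about its mean.
    loads = rng.normal(loc=mu, scale=1.0, size=10_000)
    exceedance_probabilities.append(np.mean(loads > threshold))

print(f"Probability of exceeding the threshold ranges from "
      f"{min(exceedance_probabilities):.4f} to {max(exceedance_probabilities):.4f} "
      f"across the epistemic uncertainty in the mean load.")
```

Each value of the mean sampled in the outer loop yields a different exceedance probability, so the epistemic uncertainty appears as a range of probabilities rather than a single value; no amount of sampling can reveal whether the assumed model itself is wrong, which is where ontological uncertainty begins.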

References

Diebold FX, Doherty NA, Herring RJ, eds. The Known, the Unknown, and the Unknowable in Financial Risk Management: Measurement and Theory Advancing Practice. Princeton, NJ: Princeton University Press, 2010.

Spiegelhalter D, Risk and uncertainty communication. Annual Review of Statistics and Its Application, 4, pp.31-60, 2017.

Patterson EA, Whelan MP. On the validation of variable fidelity multi-physics simulations. J. Sound and Vibration. 448:247-58, 2019.

Patterson EA, Whelan MP, Worth AP. The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application. Computational Toxicology. 100144, 2020.