
Intelligent openness

Photo credit: Tom

As an engineer and an academic, my expert opinion is often sought informally but less frequently formally, perhaps because I am reluctant to offer the certainty and precision that are so often expected of experts; instead I tend to highlight the options and uncertainties [see ‘Forecasts and chimpanzees throwing darts’ on September 2nd, 2020].  These options and uncertainties will likely change as more information and knowledge become available.  An expert who changes their mind and cannot offer certainty and precision tends not to be welcomed by society, and in particular by the media, who want simple statements and explanations.  One problem with offering certainty and precision as an expert is that it might appear you are part of a technocratic subset seeking to impose its values on the rest of society, as Mary O’Brien has argued.  The philosopher Douglas Walton has suggested that it is improper for experts to proffer their opinion when there is a naked assertion that the expert’s identity warrants acceptance of their opinion or argument.  Both O’Brien and Walton have argued that expert authority is legitimate only when it can be challenged, which is akin to Popper’s approach to the falsification of scientific theories – if it is not refutable then it is not science.  Similarly, Onora O’Neill has argued that trustworthiness requires intelligent openness.  Intelligent openness means that the information being used by the expert is accessible and useable, and that the expert’s decision or argument is understandable (clearly explained in plain language) and assessable by someone with the time, expertise and access to the detail needed to attempt to refute the expert’s statements.  In other words, experts need to be transparent and science needs to be an open enterprise.

Sources:

Burgman MA, Trusting judgements: how to get the best out of experts, Cambridge: Cambridge University Press, 2016.

Harford T, How to make the world add up: 10 rules for thinking differently about numbers, London: Bridge Street Press, 2020.

O’Brien M, Making better environmental decisions: an alternative to risk assessment, Cambridge MA: MIT Press, 2000.

Walton D, Appeal to expert opinion: arguments from authority, University Park PA: Pennsylvania State University Press, 1997.

Royal Society, Science as an open enterprise, 2012: https://royalsociety.org/topics-policy/projects/science-public-enterprise/report/

Do you know RIO?

Infrared image of a group of people in a meeting

During the pandemic many political leaders have justified their decisions by telling us that they were following advice from scientists.  I think it was Thomas Kuhn who proposed that the views of a group of scientists will be normally distributed if the group is large enough, i.e., a bell-shaped curve with a few scientists providing outlying opinions at either end and the majority in the middle of the distribution [see ‘Uncertainty about Bayesian methods’ on June 7th, 2017].  So the advice you receive depends on which scientist you consult.  Of course, you can consult a group of experts in order to identify the full range of advice and seek a consensus; however, this is notoriously difficult because some voices will be louder than others, and some experts will be very certain about their predictions of the future while others will be very cautious about predicting anything.  This is often because the former group are suffering from meta-ignorance, i.e., failing to even consider the possibility of being wrong, while the latter are so aware of the ontological or deep uncertainties that they prefer to surround their statements with caveats that render them difficult or impossible to interpret or employ in decision-making [see ‘Deep uncertainty and meta-ignorance’ on July 21st, 2021].  Politicians prefer a simple message that they can explain to the media and so tend to listen to the clear but usually inaccurate message from the confident forecasters [see ‘Forecasts and chimpanzees throwing darts’ on September 2nd, 2020].  However, with time and effort, it is possible to make rational decisions based on expert opinion even when the opinions appear to diverge.  There are several recognised protocols for expert elicitation which are used in a wide range of engineering and scientific activities to support decision-making in the absence of comprehensive information.  I frequently use a form of the Sheffield protocol, developed originally to elicit a probability distribution for an uncertain quantity from a group of experts, as sketched in the example below.  Initially, the experts are asked individually to provide private, written, independent advice on the issue of concern.  Subsequently, their advice is shared with the group and a discussion to reach a consensus is led by a facilitator.  This can be difficult if the initial advice is divergent and individuals hold strong views.  This is when RIO can help.  RIO stands for Rational Impartial Observer, and an expert group will often rapidly reach a consensus when asked to consider what RIO might reasonably believe after reading their independent advice and listening to their discussion.
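As an illustration of the elicitation step, here is a minimal sketch in Python; it is my own toy example with invented numbers, not part of the Sheffield protocol software.  Each expert supplies quartiles for the unknown quantity, a normal distribution is fitted to each set of judgments by least squares, and an equally-weighted pool of the fitted distributions approximates a starting point for what RIO might believe before the group discussion.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical elicited judgments: (lower quartile, median, upper quartile)
# for an uncertain quantity, one tuple per expert.  Values are invented.
expert_quartiles = [(4.0, 5.0, 6.5), (3.0, 4.5, 5.5), (5.0, 7.0, 9.0)]
probs = np.array([0.25, 0.50, 0.75])

def fit_normal(quartiles):
    """Fit a normal distribution to elicited quartiles by least squares."""
    q = np.asarray(quartiles)

    def loss(params):
        mu, sigma = params
        return np.sum((stats.norm.cdf(q, mu, abs(sigma)) - probs) ** 2)

    # Starting guess: median for the mean; the IQR of a normal is ~1.35 sigma.
    res = optimize.minimize(loss, x0=[q[1], (q[2] - q[0]) / 1.35],
                            method="Nelder-Mead")
    return res.x[0], abs(res.x[1])

fits = [fit_normal(q) for q in expert_quartiles]

# Equal-weight linear opinion pool: average the experts' fitted densities.
x = np.linspace(0.0, 15.0, 501)
pooled_pdf = np.mean([stats.norm.pdf(x, mu, s) for mu, s in fits], axis=0)

for i, (mu, s) in enumerate(fits, start=1):
    print(f"Expert {i}: mean = {mu:.2f}, s.d. = {s:.2f}")
print(f"Pooled mean = {np.trapz(x * pooled_pdf, x):.2f}")
```

In the Sheffield protocol itself the pooled distribution is only a convenient summary; the facilitator leads the discussion towards a single consensus distribution that RIO might reasonably hold.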

Source:

O’Hagan A, Expert knowledge elicitation: subjective but scientific, The American Statistician, 73(sup1):69-81, 2019.

Deep uncertainty and meta-ignorance

The term ‘unknown unknowns’ was made famous by Donald Rumsfeld almost 20 years ago when, as US Secretary of Defense, he used it in describing the lack of evidence about terrorist groups being supplied with weapons of mass destruction by the Iraqi government.  However, the term was probably coined almost 50 years earlier by Joseph Luft and Harrington Ingham when they developed the Johari window as a heuristic tool to help people better understand their relationships.  In engineering, and other fields in which predictive models are important tools, it is used to describe situations about which there is deep uncertainty.  Deep uncertainty refers to situations where experts do not know, or cannot agree about, what models to use, how to describe the uncertainties present, or how to interpret the outcomes from predictive models.  Rumsfeld talked about known knowns, known unknowns and unknown unknowns; an alternative, simpler but perhaps less catchy, classification is ‘the known, the unknown and the unknowable’, which Diebold, Doherty and Herring used as part of the title of their book on financial risk management.  David Spiegelhalter suggests ‘risk, uncertainty and ignorance’ before providing a more sophisticated classification: aleatory uncertainty, epistemic uncertainty and ontological uncertainty.  Aleatory uncertainty is the inevitable unpredictability of the future that can be fully described using probability.  Epistemic uncertainty is a lack of knowledge about the structure and parameters of models used to predict the future.  Ontological uncertainty is a complete lack of knowledge and understanding of the entire modelling process, i.e., deep uncertainty.  When it is not recognised that ontological uncertainty is present, we have meta-ignorance, which means failing to even consider the possibility of being wrong.  For a number of years, part of my research effort has been focussed on predictive models that are unprincipled and untestable; in other words, they are not built on widely-accepted principles or scientific laws and it is not feasible to conduct physical tests to acquire data to demonstrate their validity [see editorial ‘On the credibility of engineering models and meta-models’, JSA 50(4):2015].  Some people would say that untestability implies a model is not scientific, based on Popper’s requirement that a scientific theory must be refutable.  However, in reality, unprincipled and untestable models are encountered in a range of fields, including space engineering, fusion energy and toxicology.  We have developed a set of credibility factors that are designed as a heuristic tool to allow the relevance of such models and their predictions to be evaluated systematically [see ‘Credible predictions for regulatory decision-making’ on December 9th, 2020].  One outcome is to allow experts to agree on their disagreements and ignorance, i.e., to define the extent of our ontological uncertainty, which is an important step towards making rational decisions about the future when there is deep uncertainty.
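To make the distinction between aleatory and epistemic uncertainty concrete, here is a toy Monte Carlo sketch in Python, with invented numbers of my own: the part-to-part scatter in a component’s strength is treated as aleatory, while our limited knowledge of the mean strength, estimated from a small test sample, is epistemic.  Ontological uncertainty cannot be simulated at all, because it concerns what the model itself leaves out.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical scenario with made-up numbers: a component fails when its
# strength falls below the applied load.  Strength varies from part to
# part (aleatory) and its mean is known only from a small test sample
# (epistemic).  The scatter (standard deviation) is assumed known here.
true_mean, true_sd, n_tests = 100.0, 5.0, 10
sample = rng.normal(true_mean, true_sd, n_tests)

# Epistemic layer: plausible values of the unknown mean strength, using a
# normal approximation to the sampling distribution of the sample mean.
candidate_means = rng.normal(sample.mean(),
                             sample.std(ddof=1) / np.sqrt(n_tests), 1000)

# Aleatory layer: for each candidate mean, simulate part-to-part scatter
# and count the fraction of parts weaker than the applied load.
load = 90.0
failure_probs = np.array([np.mean(rng.normal(m, true_sd, 5000) < load)
                          for m in candidate_means])

# Each value in failure_probs reflects irreducible aleatory variation;
# the spread across them reflects reducible epistemic uncertainty.
print(f"Failure probability: median {np.median(failure_probs):.3f}, "
      f"90% interval [{np.percentile(failure_probs, 5):.3f}, "
      f"{np.percentile(failure_probs, 95):.3f}]")
```

Deep uncertainty would arise if the experts could not even agree that a normal model, or indeed any probabilistic model, describes the strength at all.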

Sources:

Diebold FX, Doherty NA, Herring RJ, eds. The Known, the Unknown, and the Unknowable in Financial Risk Management: Measurement and Theory Advancing Practice. Princeton, NJ: Princeton University Press, 2010.

Spiegelhalter D, Risk and uncertainty communication. Annual Review of Statistics and Its Application, 4:31-60, 2017.

Patterson EA, Whelan MP. On the validation of variable fidelity multi-physics simulations. J. Sound and Vibration. 448:247-58, 2019.

Patterson EA, Whelan MP, Worth AP. The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application. Computational Toxicology. 100144, 2020.

Negative capability and optimal ambiguity

Photograph of sculpture on Liverpool waterfront at night

How is your negative capability?  The very term ‘negative capability’ conveys confusion and ambiguity.  It means our ability to accept uncertainty and a lack of knowledge or control.  It was coined by John Keats to describe the skill of appreciating something without fully understanding it.  It implies suspending judgment about something in order to learn more about it.  This is difficult because we have to move out of a low-entropy mindset and consider how it fits in a range of possible mindsets or neuronal assemblies, which raises our psychological entropy and with it our anxiety and mental stress [see ‘Psychological entropy increased by effectual leaders’ on February 10th, 2021].  If we are able to tolerate an optimal level of ambiguity and uncertainty then we might be able to develop an appreciation of a complex system, and even an ability to anticipate its behaviour, without a full knowledge or understanding of it.  Our subconscious brain has excellent negative capability; for example, most of us can catch a ball without understanding, or even knowing, anything about the mechanics of its flight towards us, and we will accept a ride home from a friend with no knowledge of their driving skills and no control over the vehicle.  However, if our conscious brain knows that they crashed their car last week then it might override the subconscious and cause us to think again before accepting the offer of a ride home.  Perhaps this is because our conscious brain tends to have less negative capability and likes to be in control.  Engineers like to talk about their intuition, which is probably synonymous with their negative capability because it is their ability to appreciate and anticipate the behaviour of an engineering system without a full knowledge and understanding of it.  This intuition is usually based on experience and perhaps resides in the subconscious mind because, if you ask an engineer to explain a decision or prediction based on their intuition, they will probably struggle to provide a complete and rational explanation.  They are comfortable with an optimal level of ambiguity, although of course you might not be so comfortable.

Sources:

Richard Gunderman, ‘John Keats’ concept of “negative capability” – or sitting in uncertainty – is needed now more than ever’, The Conversation, February 21st, 2021.

David Jeffery, Letter: Keats was uneasy about the pursuit of perfection.  FT Weekend, April 2nd, 2021.

Caputo JD. Truth: philosophy in transit. London: Penguin, 2013.