Tag Archives: Karl Popper

Intelligent openness

Photo credit: Tom

As an engineer and an academic, my opinion as an expert is often sought informally but less frequently formally, perhaps because I am reluctant to offer the certainty and precision that is so often expected of experts and instead tend to highlight the options and uncertainties [see ‘Forecasts and chimpanzees throwing darts’ on September 2nd 2020].  These options and uncertainties will likely change as more information and knowledge becomes available.  An expert who changes their mind and cannot offer certainty and precision tends not to be welcomed by society, and in particular by the media, who want simple statements and explanations.  One problem with offering certainty and precision as an expert is that it might appear you are part of a technocratic subset seeking to impose their values on the rest of society, as Mary O’Brien has argued.  The philosopher Douglas Walton has suggested that it is improper for experts to proffer their opinion when there is a naked assertion that the expert’s identity warrants acceptance of their opinion or argument.  Both O’Brien and Walton have argued that expert authority is legitimate only when it can be challenged, which is akin to Popper’s approach to the falsification of scientific theories – if it is not refutable then it is not science.  Similarly, Onora O’Neill has argued that trustworthiness requires intelligent openness.  Intelligent openness means that the information being used by the expert is accessible and useable, and that the expert’s decision or argument is understandable (clearly explained in plain language) and assessable by someone with the time, expertise and access to the detail needed to attempt to refute the expert’s statements.  In other words, experts need to be transparent and science needs to be an open enterprise.

Sources:

Burgman MA, Trusting judgements: how to get the best out of experts, Cambridge: Cambridge University Press, 2016.

Harford T, How to make the world add up: 10 rules for thinking differently about numbers, London: Bridge Street Press, 2020.

O’Brien M, Making better environmental decisions: an alternative to risk assessment, Cambridge MA: MIT Press, 2000.

Walton D, Appeal to expert opinion: arguments from authority, University Park PA: Pennsylvania State University Press, 1997.

Royal Society, Science as an open enterprise, 2012: https://royalsociety.org/topics-policy/projects/science-public-enterprise/report/

Deep uncertainty and meta-ignorance

The term ‘unknown unknowns’ was made famous by Donald Rumsfeld almost 20 years ago when, as US Secretary of Defense, he used it in describing the lack of evidence about terrorist groups being supplied with weapons of mass destruction by the Iraqi government. However, the term was probably coined almost 50 years earlier by Joseph Luft and Harrington Ingham when they developed the Johari window as a heuristic tool to help people to better understand their relationships.  In engineering, and other fields in which predictive models are important tools, it is used to describe situations about which there is deep uncertainty.  Deep uncertainty refers to situations where experts do not know or cannot agree about which models to use, how to describe the uncertainties present, or how to interpret the outcomes from predictive models.  Rumsfeld talked about known knowns, known unknowns, and unknown unknowns; an alternative, simpler but perhaps less catchy, classification is ‘the known, the unknown, and the unknowable’, which Diebold, Doherty and Herring used as part of the title of their book on financial risk management.  David Spiegelhalter suggests ‘risk, uncertainty and ignorance’ before providing a more sophisticated classification: aleatory uncertainty, epistemic uncertainty and ontological uncertainty.  Aleatory uncertainty is the inevitable unpredictability of the future that can be fully described using probability.  Epistemic uncertainty is a lack of knowledge about the structure and parameters of the models used to predict the future.  Ontological uncertainty is a complete lack of knowledge and understanding about the entire modelling process, i.e. deep uncertainty.  When we do not recognise that ontological uncertainty is present, we have meta-ignorance, which means failing even to consider the possibility of being wrong.  For a number of years, part of my research effort has been focussed on predictive models that are unprincipled and untestable; in other words, they are not built on widely-accepted principles or scientific laws and it is not feasible to conduct physical tests to acquire data to demonstrate their validity [see editorial ‘On the credibility of engineering models and meta-models‘, JSA 50(4):2015].  Some people would say that untestability implies a model is not scientific, based on Popper’s requirement that a scientific theory must be refutable.  However, in reality unprincipled and untestable models are encountered in a range of fields, including space engineering, fusion energy and toxicology.  We have developed a set of credibility factors designed as a heuristic tool to allow the relevance of such models and their predictions to be evaluated systematically [see ‘Credible predictions for regulatory decision-making‘ on December 9th, 2020].  One outcome is to allow experts to agree on their disagreements and ignorance, i.e., to define the extent of our ontological uncertainty, which is an important step towards making rational decisions about the future when there is deep uncertainty.
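To make the distinction more concrete, the minimal sketch below shows a toy Monte Carlo calculation in Python in which a predicted load is the product of a randomly varying input (aleatory uncertainty) and an imprecisely known model parameter (epistemic uncertainty); the model, parameter values and distributions are purely illustrative assumptions and are not taken from any of the work cited here.

import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Aleatory uncertainty: the in-service input x varies randomly and this
# variability is irreducible; it is described by a known distribution.
x = rng.normal(loc=100.0, scale=5.0, size=n)      # illustrative values

# Epistemic uncertainty: the model parameter k is not known precisely;
# more tests could narrow this range, so it is reducible in principle.
k = rng.uniform(low=0.9, high=1.1, size=n)        # illustrative bounds

# Toy model: predicted load L = k * x
load = k * x
print(f"mean prediction: {load.mean():.1f}")
print(f"spread from the combined uncertainties (std): {load.std():.1f}")

# Ontological uncertainty does not appear in the calculation at all: it is
# the possibility that L = k * x is simply the wrong model, which no amount
# of sampling within the model can reveal.

The point of separating the two contributions is that only the epistemic part can be reduced by gathering more data or performing more experiments, whereas the aleatory scatter remains however much we learn.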

References

Diebold FX, Doherty NA, Herring RJ, eds. The Known, the Unknown, and the Unknowable in Financial Risk Management: Measurement and Theory Advancing Practice. Princeton, NJ: Princeton University Press, 2010.

Spiegelhalter D, Risk and uncertainty communication. Annual Review of Statistics and Its Application, 4, pp.31-60, 2017.

Patterson EA, Whelan MP. On the validation of variable fidelity multi-physics simulations. J. Sound and Vibration. 448:247-58, 2019.

Patterson EA, Whelan MP, Worth AP. The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application. Computational Toxicology. 100144, 2020.

Reasons for publishing scientific papers

A few months ago I wrote about how we are drowning in information as a result of the two million papers published in journals every year [see ‘We are drowning in information while starving for wisdom‘ on January 20th, 2021]. As someone who has published about 10 papers each year for the last couple of decades, including three this year already, I feel I should provide some explanation for continuing to contribute to the deluge of papers. I think there are four main reasons for publishing scientific papers. First, to report a discovery – a new contribution to knowledge or understanding.  This is the primary requirement for publication in a scientific journal, but the significance of the contribution is frequently diminished by both the publisher’s and the author’s need to publish, which leads to many papers in which it is hard to identify the original contribution. The second reason is to fulfil the expectations or requirements of a funding agency (including your employer); I think this was probably the prime driver for my first paper, which reported the results of a survey of muskoxen in Greenland conducted during an expedition in 1982. The third reason is to support a promotion case, either your own or that of one of your co-authors; of course, this is not incompatible with reporting original contributions to knowledge, but it can be a driver towards small contributions, especially when promotion committees consider only the quantity and not the quality of published papers. The fourth reason is to support the careers of members of the research team; in some universities it is impossible to graduate with a PhD degree in science and engineering without publishing a couple of papers, although most supervisors encourage PhD students to publish their work in at least one paper before submitting their PhD thesis, even when it is not compulsory. Post-doctoral researchers have a less urgent need to publish unless they are planning an academic career, in which case they will need a more impressive publication record than their competitors. Profit is the prime reason for most publishers to publish papers.  Publishers make more money when they sell more journals with more papers in them, which drives the launch of new journals and the filling of journals with more papers; this process is poorly moderated by the need to ensure the papers are worth reading.  It might be an urban myth, but some studies have suggested that half of published papers are read only by their editor and authors.  Thirty years ago, my PhD supervisor, who was also my mentor during my early career as an academic, already suspected this lack of readers and used to greet the news of the publication of each of my papers as ‘more stuffing for your chair’.

Sources:

Patterson EA, Sightings of Muskoxen in Northern Scoresby Land, Greenland, Arctic, 37(1): 61-63, 1984.

Eveleth R, Academics write papers arguing over how many people read (and cite) their papers, Smithsonian Magazine, March 25th, 2014.

Image: Hannes Grobe, AWI, CC BY-SA 2.5 <https://creativecommons.org/licenses/by-sa/2.5>, via Wikimedia Commons.