
Certainty is unattainable and near-certainty unaffordable

The economists John Kay and Mervyn King assert in their book ‘Radical Uncertainty – decision-making beyond numbers‘ that ‘economic forecasting is necessarily harder than weather forecasting’ because the world of economics is non-stationary whereas the weather is governed by unchanging laws of nature. Kay and King observe that both central banks and meteorological offices have ‘to convey inescapable uncertainty to people who crave unavailable certainty’. In other words, the necessary assumptions and idealisations of both economic and meteorological models, combined with the inaccuracies of the input data, produce inevitable uncertainty in the predictions. However, people seeking to make decisions based on the predictions want certainty because it is very difficult to make choices when faced with uncertainty – it raises our psychological entropy [see ‘Psychological entropy increased by ineffective leaders‘ on February 10th, 2021].  Engineers face similar difficulties in conveying the inescapable uncertainty in the reliability of their systems to people desiring unavailable certainty.  The second law of thermodynamics ensures that perfection is unattainable [see ‘Impossible perfection‘ on June 5th, 2013] and that there will always be flaws of some description present in a system [see ‘Scattering electrons reveal dislocations in material structure‘ on November 11th, 2020].  Of course, we can expend more resources to eliminate flaws and increase the reliability of a system, but the second law will always limit our success. Consequently, to finish where I started with a quote from Kay and King, ‘certainty is unattainable and the price of near-certainty unaffordable’ in both economics and engineering.

From strain measurements to assessing El Niño events

One of the exciting aspects of leading a university research group is that you can never be quite sure where the research is going next.  We published a nice example of this unpredictability last week in Royal Society Open Science in a paper called ‘Transformation of measurement uncertainties into low-dimensional feature vector space‘ [1].  While the title is an accurate description of the contents, it does not give much away and certainly does not reveal that we proposed a new method for assessing the occurrence of El Niño events.  For some time we have been working with massive datasets of measurements from arrays of sensors and representing them by fitting polynomials in a process known as image decomposition [see ‘Recognising strain‘ on October 28th, 2015]. The relatively small number of coefficients from these polynomials can be collated into a feature vector which facilitates comparison with other datasets [see for example, ‘Out of the valley of death into a hype cycle‘ on February 24th, 2021].  Our recent paper provides a solution to the issue of representing the measurement uncertainty in the same space as the feature vector, which is roughly what we set out to do.  We demonstrated our new method for representing the measurement uncertainty by calibrating and validating a computational model of a simple beam in bending, using data from an earlier study in an EU-funded project called VANESSA [2] — so no surprises there.  However, my co-author and PhD student, Antonis Alexiadis, then went looking for other interesting datasets with which to demonstrate the new method.  He found a set of spatially-varying uncertainties associated with a metamodel of soil moisture in a river basin in China [3] and global oceanographic temperature fields collected monthly over 11 years from 2002 to 2012 [4].  We used the latter set of data to develop a new technique for assessing the occurrence of El Niño events in the Pacific Ocean.  Our technique is based on global ocean dynamics rather than on the small region of the Pacific Ocean that is usually used, and has the added advantages of providing a confidence level on the assessment as well as enabling straightforward comparisons of predictions and measurements.  The comparison of predictions and measurements is a recurring theme in our current research but I did not expect it to lead into ocean dynamics.

Image is Figure 11 from [1] showing convex hulls fitted to the cloud of points representing the uncertainty intervals for the ocean temperature measurements for each month in 2002 using only the three most significant principal components. The lack of overlap between hulls can be interpreted as implying a significant difference in the temperature between months.
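For readers curious about how such a pipeline fits together, the sketch below is a minimal illustration rather than the code behind [1]: it fits a low-order two-dimensional Chebyshev polynomial to each field, collates the coefficients into a feature vector, projects a set of such vectors onto their three most significant principal components, and applies a crude test of whether the convex hulls of two clouds of points overlap. The polynomial order, the synthetic fields and the overlap test are all assumptions made purely for illustration; in the paper the clouds of points come from propagating the measurement uncertainties of real ocean temperature fields, and non-overlapping hulls are interpreted as a significant difference between months.

```python
# Minimal, illustrative sketch only: synthetic data and simplified choices throughout.
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.spatial import ConvexHull, Delaunay

def feature_vector(field, order=4):
    """Fit a 2-D Chebyshev polynomial to a field and return its coefficients."""
    ny, nx = field.shape
    X, Y = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(-1, 1, ny))
    V = C.chebvander2d(X.ravel(), Y.ravel(), [order, order])
    coeffs, *_ = np.linalg.lstsq(V, field.ravel(), rcond=None)
    return coeffs

def hulls_overlap(a, b):
    """Crude overlap test: does any point of one cloud lie inside the other's hull?"""
    return (Delaunay(b).find_simplex(a) >= 0).any() or \
           (Delaunay(a).find_simplex(b) >= 0).any()

# Two synthetic 'monthly' fields; random perturbations stand in for the
# measurement uncertainty intervals (purely illustrative).
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 60)
base = np.outer(np.cos(np.pi * x), np.sin(np.pi * x))

features, labels = [], []
for month in (0, 1):
    field = base * (1 + 0.2 * month)
    for _ in range(30):                      # perturbed realisations per month
        noisy = field + 0.05 * rng.standard_normal(field.shape)
        features.append(feature_vector(noisy))
        labels.append(month)
features, labels = np.array(features), np.array(labels)

# Project the feature vectors onto their three most significant principal components.
centred = features - features.mean(axis=0)
_, _, Vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ Vt[:3].T

cloud_a, cloud_b = scores[labels == 0], scores[labels == 1]
print("hull volumes:", ConvexHull(cloud_a).volume, ConvexHull(cloud_b).volume)
print("hulls overlap:", hulls_overlap(cloud_a, cloud_b))
```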

References:

[1] Alexiadis A, Ferson S, Patterson EA. 2021. Transformation of measurement uncertainties into low-dimensional feature vector space. Royal Society Open Science, 8(3): 201086.

[2] Lampeas G, Pasialis V, Lin X, Patterson EA. 2015.  On the validation of solid mechanics models using optical measurements and data decomposition. Simulation Modelling Practice and Theory 52, 92-107.

[3] Kang J, Jin R, Li X, Zhang Y. 2017. Block Kriging with measurement errors: a case study of the spatial prediction of soil moisture in the middle reaches of Heihe River Basin. IEEE Geoscience and Remote Sensing Letters, 14, 87-91.

[4] Gaillard F, Reynaud T, Thierry V, Kolodziejczyk N, von Schuckmann K. 2016. In situ-based reanalysis of the global ocean temperature and salinity with ISAS: variability of the heat content and steric height. J. Climate. 29, 1305-1323.

Credible predictions for regulatory decision-making

Regulators are charged with ensuring that manufactured products, from aircraft and nuclear power stations to cosmetics and vaccines, are safe.  The general public seeks certainty that these devices and the materials and chemicals they are made from will not harm them or the environment.  Technologists who design and manufacture these products know that absolute certainty is unattainable and near-certainty is unaffordable.  Hence, they attempt to deliver the service or product that society desires while ensuring that the risks are As Low As Reasonably Practicable (ALARP).  The role of regulators is to independently assess the risks, make a judgment on their acceptability and thus decide whether the operation of a power station or distribution of a vaccine can go ahead.  These are difficult decisions with huge potential consequences – just think of the more than three hundred people killed in the two crashes of Boeing 737 Max airplanes or the 10,000 or so people affected by birth defects caused by the drug thalidomide.  Evidence presented to support applications for regulatory approval is largely based on physical tests, for example fatigue tests on an aircraft structure or toxicological tests using animals.  In some cases the physical tests might not be entirely representative of the real-life situation, which can make it difficult to make decisions using the data; for instance, a ground test on an airplane is not the same as a flight test and in many respects the animals used in toxicity testing are physiologically different to humans.  In addition, physical tests are expensive and time-consuming, which both drives up the costs of seeking regulatory approval and slows down the translation of new innovative products to the market.  The almost ubiquitous use of computer-based simulations to support the research, development and design of manufactured products inevitably leads to their use in supporting regulatory applications.  This creates challenges for regulators who must judge the trustworthiness of predictions from these simulations [see ‘Fake facts & untrustworthy predictions‘ on December 4th, 2019]. It is standard practice for modellers to demonstrate the validity of their models; however, validation does not automatically lead to acceptance of predictions by decision-makers.  Acceptance is more closely related to scientific credibility.  I have been working across a number of disciplines on the scientific credibility of models, including in engineering, where multi-physics phenomena such as hypersonic flight and fusion energy are important [see ‘Thought leadership in fusion energy‘ on October 9th, 2019], and in computational biology and toxicology [see ‘Hierarchical modelling in engineering and biology‘ on March 14th, 2018]. Working together with my collaborators in these disciplines, we have developed a common set of factors which underpin scientific credibility; they are based on principles drawn from the literature on the philosophy of science and are designed to be both discipline-independent and method-agnostic [Patterson & Whelan, 2019; Patterson et al, 2021]. We hope that our cross-disciplinary approach will break down the subject silos that have become established as different scientific communities have developed their own frameworks for validating models.
As mentioned above, the process of validation tends to be undertaken by model developers and, in some sense, belongs to them; whereas credibility is not exclusive to the developer but is a form of trust that needs to be shared with the decision-maker who seeks to use the predictions to inform their decision [see ‘Credibility is in the eye of the beholder‘ on April 20th, 2016].  Trust requires a common knowledge base and understanding that is usually built through interactions.  We hope the credibility factors will provide a framework for these interactions as well as a structure for building a portfolio of evidence that demonstrates the reliability of a model.

References:

Patterson EA & Whelan MP, On the validation of variable fidelity multi-physics simulations, J. Sound & Vibration, 448:247-258, 2019.

Patterson EA, Whelan MP & Worth A, The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application, Computational Toxicology, 17: 100144, 2021.

Image: Extract from abstract by Zahrah Resh.

Forecasts and chimpanzees throwing darts

During the coronavirus pandemic, politicians have taken to telling us that their decisions are based on the advice of their experts while the news media have bombarded us with predictions from experts.  Perhaps not unexpectedly, with the benefit of hindsight, many of these decisions and predictions appear to have been ill-advised or inaccurate, which is likely to lead to a loss of trust in both politicians and experts.  However, this is unsurprising and the reliability of experts, particularly those willing to make public pronouncements, is well known to be dubious.  Professor Philip E. Tetlock of the University of Pennsylvania has assessed the accuracy of forecasts made by purported experts over two decades and found that they were little better than a chimpanzee throwing darts.  Moreover, the more well-known experts seemed to be worse at forecasting [Tetlock & Gardner, 2016].  In other words, we should assign less credibility to those experts whose advice is more frequently sought by politicians or quoted in the media.  Tetlock’s research has found that the best forecasters are better at inductive reasoning, pattern detection, cognitive flexibility and open-mindedness [Mellers et al, 2015]. People with these attributes will tend not to express unambiguous opinions but instead will attempt to balance all factors in reaching a view that embraces many uncertainties.  Politicians and the media believe that we want to hear a simple message unadorned by the complications of describing reality; hence, they avoid the best forecasters and prefer those who provide a clear but usually inaccurate message.  Perhaps that’s why engineers are rarely interviewed by the media or quoted in the press: they tend to be good at inductive reasoning, pattern detection and cognitive flexibility, and to be open-minded [see ‘Einstein and public engagement‘ on August 8th, 2018].  Of course, this was well known to the Chinese philosopher Lao Tzu, who is reported to have said: ‘Those who have knowledge, don’t predict. Those who predict, don’t have knowledge.’

References:

Mellers, B., Stone, E., Atanasov, P., Rohrbaugh, N., Metz, S.E., Ungar, L., Bishop, M.M., Horowitz, M., Merkle, E. and Tetlock, P., 2015. The psychology of intelligence analysis: Drivers of prediction accuracy in world politics. Journal of Experimental Psychology: Applied, 21(1): 1-14.

Tetlock, P.E. and Gardner, D., 2016. Superforecasting: The art and science of prediction. London: Penguin Random House.