Tag Archives: predictions

Predicting the future through holistic awareness

It is traditional at the start of the year to speculate on what will happen in the new year.  However, as Niels Bohr is reputed to have said, ‘Prediction is very difficult, especially about the future’.  Some people have suggested that our brains are constantly predicting the future: we weigh up the options for what might happen next before choosing a course of action. Our ancestors might have watched a fish swimming near a river bank and predicted where it would be a moment later when their spear entered the water. Or, on a longer timescale, they predicted that seeds planted at a particular time of year would yield a crop some months later. Our predictions are not always correct, but our lives depend on enough of them being reliable that we have evolved to be good predictors of the immediate future.

In Chinese thought, a distinction is made between predicting the near and the distant future, because the former is possible and the latter is impossible, at least with any degree of confidence (Simandan, 2018).  Wisdom can be seen as understanding the futility of trying to predict the distant future while being able to sense the near future through an acute awareness of, and immersion in, one’s surroundings. This implies that a wise person can go beyond the everyday predictions of the immediate future, made largely unconsciously by our brains, and anticipate events on a slightly longer timescale, the near future.

In engineering terms, events in the near future are short-term behaviour dominated by the current state of the system, whereas events in the distant future are largely determined by external interactions with the system. This seems entirely consistent with the Chinese concept of wisdom arising from ‘vanishing into things’, which means becoming so immersed in a situation that one can sense the current state of the system and reliably anticipate the near future. Some engineers might call this intuition, which has been defined as ‘judgments that arise through rapid, non-conscious and holistic associations’ (Dane & Pratt, 2007).  So, in 2021 I hope to continue to exercise my intuition and remain immersed in a number of issues, but I am not going to attempt to predict any distant events.

References:

Dane, E. and Pratt, M.G., 2007. Exploring intuition and its role in managerial decision making. Academy of Management Review, 32(1), pp.33-54.

Simandan, D., 2018. Wisdom and foresight in Chinese thought: sensing the immediate future. Journal of Futures Studies, 22(3), pp.35-50.


Credible predictions for regulatory decision-making

Regulators are charged with ensuring that manufactured products, from aircraft and nuclear power stations to cosmetics and vaccines, are safe.  The general public seeks certainty that these devices, and the materials and chemicals they are made from, will not harm them or the environment.  Technologists who design and manufacture these products know that absolute certainty is unattainable and near-certainty is unaffordable.  Hence, they attempt to deliver the service or product that society desires while ensuring that the risks are As Low As Reasonably Practicable (ALARP).  The role of regulators is to independently assess the risks, make a judgment on their acceptability and thus decide whether the operation of a power station or the distribution of a vaccine can go ahead.  These are difficult decisions with huge potential consequences – just think of the more than three hundred people killed in the two crashes of Boeing 737 Max airplanes or the 10,000 or so people affected by birth defects caused by the drug thalidomide.  Evidence presented to support applications for regulatory approval is largely based on physical tests, for example fatigue tests on an aircraft structure or toxicological tests using animals.  In some cases the physical tests might not be entirely representative of the real-life situation, which can make it difficult to base decisions on the data: a ground test on an airplane is not the same as a flight test, and in many respects the animals used in toxicity testing are physiologically different to humans.  In addition, physical tests are expensive and time-consuming, which both drives up the cost of seeking regulatory approval and slows down the translation of innovative new products to the market.

The almost ubiquitous use of computer-based simulations to support the research, development and design of manufactured products inevitably leads to their use in supporting regulatory applications.  This creates challenges for regulators, who must judge the trustworthiness of predictions from these simulations [see ‘Fake facts & untrustworthy predictions‘ on December 4th, 2019]. It is standard practice for modellers to demonstrate the validity of their models; however, validation does not automatically lead to acceptance of predictions by decision-makers.  Acceptance is more closely related to scientific credibility.  I have been working across a number of disciplines on the scientific credibility of models, including in engineering, where multi-physics phenomena such as hypersonic flight and fusion energy are important [see ‘Thought leadership in fusion energy‘ on October 9th, 2019], and in computational biology and toxicology [see ‘Hierarchical modelling in engineering and biology‘ on March 14th, 2018]. Working together with my collaborators in these disciplines, we have developed a common set of factors underpinning scientific credibility that are based on principles drawn from the literature on the philosophy of science and are designed to be both discipline-independent and method-agnostic [Patterson & Whelan, 2019; Patterson et al, 2021]. We hope that our cross-disciplinary approach will break down the subject silos that have become established as different scientific communities have developed their own frameworks for validating models.
As mentioned above, the process of validation tends to be undertaken by model developers and, in some sense, belongs to them, whereas credibility is not exclusive to the developer but is a form of trust that must be shared with the decision-maker who seeks to use the predictions to inform their decisions [see ‘Credibility is in the eye of the beholder‘ on April 20th, 2016].  Trust requires a common base of knowledge and understanding that is usually built through interactions.  We hope the credibility factors will provide a framework for these interactions, as well as a structure for building a portfolio of evidence that demonstrates the reliability of a model.
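To make the idea of a portfolio of evidence more concrete, here is a minimal sketch of how a decision-maker might record judgments against a set of credibility factors. It is an illustration only: the factor names below are invented placeholders, not the published set, which is given in Patterson & Whelan (2019) and Patterson et al (2021).

```python
# Minimal sketch: recording a portfolio of evidence against credibility
# factors. The factor names are hypothetical placeholders; see Patterson &
# Whelan (2019) and Patterson et al (2021) for the published factors.
from dataclasses import dataclass, field

@dataclass
class CredibilityFactor:
    name: str
    evidence: list = field(default_factory=list)  # e.g. reports, datasets
    accepted: bool = False  # the decision-maker's judgment, not the modeller's

def portfolio_coverage(factors):
    """Fraction of factors the decision-maker accepts as satisfied."""
    return sum(f.accepted for f in factors) / len(factors)

factors = [
    CredibilityFactor("transparency of assumptions"),
    CredibilityFactor("quantified measurement and model uncertainty"),
    CredibilityFactor("validation against independent data"),
]
factors[1].evidence.append("uncertainty-report.pdf")  # hypothetical document
factors[1].accepted = True
print(f"Coverage: {portfolio_coverage(factors):.0%}")  # Coverage: 33%
```

The point of the structure is that acceptance is recorded by the decision-maker rather than asserted by the modeller, reflecting the shared nature of the trust described above.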

References:

Patterson EA & Whelan MP, On the validation of variable fidelity multi-physics simulations, J. Sound & Vibration, 448:247-258, 2019.

Patterson EA, Whelan MP & Worth A, The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application, Computational Toxicology, 17: 100144, 2021.

Image: Extract from abstract by Zahrah Resh.

Forecasts and chimpanzees throwing darts

During the coronavirus pandemic, politicians have taken to telling us that their decisions are based on the advice of their experts, while the news media have bombarded us with predictions from experts.  Perhaps not unexpectedly, with the benefit of hindsight, many of these decisions and predictions appear to have been ill-advised or inaccurate, which is likely to lead to a loss of trust in both politicians and experts.  However, this is unsurprising: the reliability of experts, particularly those willing to make public pronouncements, is well-known to be dubious.  Professor Philip E. Tetlock of the University of Pennsylvania has assessed the accuracy of forecasts made by purported experts over two decades and found that they were little better than a chimpanzee throwing darts.  Moreover, the better-known experts seemed to be worse at forecasting [Tetlock & Gardner, 2016].  In other words, we should assign less credibility to those experts whose advice is most frequently sought by politicians or quoted in the media.  Tetlock’s research has found that the best forecasters are better at inductive reasoning, pattern detection, cognitive flexibility and open-mindedness [Mellers et al, 2015]. People with these attributes tend not to express unambiguous opinions but instead attempt to balance all factors in reaching a view that embraces many uncertainties.  Politicians and the media believe that we want to hear a simple message unadorned by the complications of describing reality; hence, they avoid the best forecasters and prefer those who provide a clear but usually inaccurate message.  Perhaps that’s why engineers are rarely interviewed by the media or quoted in the press: they tend to be good at inductive reasoning, pattern detection and cognitive flexibility, and are open-minded [see ‘Einstein and public engagement‘ on August 8th, 2018].  Of course, this was well-known to the Chinese philosopher Lao Tzu, who is reported to have said: ‘Those who have knowledge, don’t predict. Those who predict, don’t have knowledge.’
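Tetlock’s forecasting tournaments scored forecasters using the Brier score: the mean squared difference between the probability a forecaster assigned to an event and its actual outcome. The sketch below, with invented example numbers, shows how a bold-but-wrong pundit can score worse than the proverbial dart-throwing chimpanzee, whose uniform guess of 0.5 on binary questions scores 0.25.

```python
# Minimal sketch of the Brier score used in forecasting tournaments:
# lower is better, and always answering 0.5 on binary questions scores 0.25.
# The forecasts and outcomes below are invented for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 0, 1, 0]               # what actually happened (1 = event occurred)
pundit = [0.95, 0.90, 0.10, 0.20, 0.85]  # confident, often wrong
hedger = [0.70, 0.40, 0.30, 0.60, 0.35]  # cautious, better calibrated

print(brier_score(pundit, outcomes))  # 0.437 - worse than the chimpanzee's 0.25
print(brier_score(hedger, outcomes))  # 0.1245
```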

References:

Mellers, B., Stone, E., Atanasov, P., Rohrbaugh, N., Metz, S.E., Ungar, L., Bishop, M.M., Horowitz, M., Merkle, E. and Tetlock, P., 2015. The psychology of intelligence analysis: drivers of prediction accuracy in world politics. Journal of Experimental Psychology: Applied, 21(1), pp.1-14.

Tetlock, P.E. and Gardner, D., 2016. Superforecasting: The art and science of prediction. London: Penguin Random House.

Alleviating industrial uncertainty

Want to know how to assess the quality of predictions of structural deformation from a computational model, and how to diagnose the causes of differences between measurements and predictions?  The MOTIVATE project has the answers; that might seem like an over-assertive claim, but read on and make your own judgment.  Eighteen months ago, I reported on a new method for quantifying the uncertainty present in measurements of deformation made in an industrial environment [see ‘Industrial uncertainty’ on December 12th, 2018] that we were trialling on a 1 m square panel of an aircraft fuselage.  Recently, we have used the measurement uncertainty we found to make judgments about the quality of predictions from computer models of the panel under compressive loading.

The top graphic shows the outside surface of the panel (left), with a speckle pattern to allow measurements of its deformation using digital image correlation (DIC) [see ‘256 shades of grey‘ on January 22nd, 2014 for a brief explanation of DIC], and the inside surface (right), with stringers and ribs.  The bottom graphic shows our results for two load cases: a 50 kN compression (top row) and a 50 kN compression combined with 1 degree of torsion (bottom row).  The left column shows the out-of-plane deformation measured using a stereoscopic DIC system and the middle column shows the corresponding predictions from a computational model using finite element analysis [see ‘Did cubism inspire engineering analysis?’ on January 25th, 2017].  We have described these deformation fields in a reduced form, as feature vectors, by applying image decomposition [see ‘Recognizing strain’ on October 28th, 2015 for a brief explanation of image decomposition].  The elements of the feature vectors are known as shape descriptors, and corresponding pairs of them, from the measurements and predictions, are plotted in the graphs on the right of the bottom graphic for each load case.

If the predictions were in perfect agreement with the measurements then all of the points on these graphs would lie on the line of equality [y = x], which is the solid line on each graph.  However, perfect agreement is unobtainable because there will always be uncertainty present; so the question arises: how much deviation from the solid line is acceptable?  One answer is that the deviation should be less than the uncertainty present in the measurements, which we evaluated with our new method and which is shown by the dashed lines.  Hence, when all of the points fall inside the dashed lines, the predictions are at least as good as the measurements.  If some points lie outside the dashed lines, then we can look at the form of the corresponding shape descriptors to start diagnosing why we have significant differences between our model and the experiment.  The forms of these outlying shape descriptors are shown as insets on the plots.  However, busy or non-technical decision-makers are often not interested in this level of detailed analysis and instead just want to know how good the predictions are.  To answer this question, we have implemented a validation metric (VM) that we developed [see ‘Million to one’ on November 21st, 2018], which allows us to state the probability that the predictions and measurements are from the same population, given the known uncertainty in the measurements; these probabilities are shown in the black boxes superimposed on the graphs.
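For readers who want to experiment with the idea, here is a minimal sketch in Python of the comparison step. The function, the Gaussian noise assumption and the numbers are invented for illustration, and the per-descriptor test is a simplified stand-in for, not a reproduction of, the validation metric developed in the MOTIVATE project.

```python
# Simplified sketch: compare paired shape descriptors from measurement and
# prediction against a band of half-width u (the measurement uncertainty)
# around the line of equality, and flag outliers for diagnosis. This assumes
# Gaussian measurement noise and is a stand-in for the published validation
# metric, not a reproduction of it.
import numpy as np
from scipy import stats

def compare_descriptors(measured, predicted, u):
    """Return a mask of descriptors inside the uncertainty band and the
    probability that each deviation is explained by N(0, u^2) noise."""
    diffs = np.asarray(predicted) - np.asarray(measured)
    inside = np.abs(diffs) <= u  # within the dashed lines
    p_each = 2.0 * (1.0 - stats.norm.cdf(np.abs(diffs), scale=u))
    return inside, p_each

# Invented shape-descriptor values for illustration only:
measured = np.array([2.1, -0.8, 5.3, 0.4])
predicted = np.array([2.0, -0.9, 6.6, 0.5])
inside, p_each = compare_descriptors(measured, predicted, u=0.5)
print(inside)  # [ True  True False  True] -> third descriptor is an outlier
for i in np.where(~inside)[0]:
    print(f"Descriptor {i}: inspect its spatial form to diagnose the model.")
```

A full implementation would condense the comparison into the single probability quoted to decision-makers, as described above.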

These novel methods create a toolbox for alleviating uncertainty about predictions of structural behaviour in industrial contexts.  Please get in touch if you would like more information or want to test these tools yourself.

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660 and the Swiss State Secretariat for Education, Research and Innovation under contract number 17.00064.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.