
Slowly crossing the valley of death

The valley of death in technology development is well known amongst research engineers and their sponsors. It is the gap between discovery and application, or between the realization of an idea in a laboratory and its implementation in the real world. Some of my research has made it across the valley of death, for example the poleidoscope about 15 years ago (see ‘Poleidoscope (=polariscope+kaleidoscope)’ on October 14th, 2020). Our work on quantitative comparisons of data fields from physical measurements and computer predictions is about three-quarters of the way across the valley. We published a paper in December (see Dvurecenska et al, 2020) on its application to a large panel from the fuselage of an aircraft, based on work we completed as part of the MOTIVATE project. I reported the application of the research in almost real time in a post in December 2018 (see ‘Industrial Uncertainty’ on December 12th, 2018) and in further detail in May 2020 as we submitted the manuscript for publication (see ‘Alleviating industrial uncertainty’ on May 13th, 2020). However, the realization in the laboratory occurred nearly a decade ago when teams from Michigan State University and the University of Liverpool came together in the ADVISE project funded by the EU Framework 7 programme (see Wang et al, 2011). Subsequently, the team at Michigan State University moved to the University of Liverpool and, in collaboration with researchers at Empa, developed the technique that was applied in the MOTIVATE project (see Sebastian et al, 2013).

The work published in December represents a step into the valley of death: from a university environment into a full-scale test laboratory at Empa using a real piece of aircraft. The MOTIVATE project involved a further step to a demonstration on an ongoing test of a cockpit at Airbus, which was also reported in a post last May (see ‘The blind leading the blind’ on May 27th, 2020). We are now working with Airbus in a new programme to embed the process of quantitative comparison of fields of measurements and predictions into their routine test procedures for aerospace structures. So, I would like to think we are climbing out of the valley.

Image: not Death Valley but taken on a road trip in 2008 somewhere between Moab, UT and Kanab, UT while living in Okemos, MI.
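
For readers unfamiliar with what a quantitative comparison of measured and predicted data fields involves, the essence is to condense each full-field map into a compact feature vector and then compare the two vectors in the light of the measurement uncertainty. The snippet below is only a minimal sketch of that idea, assuming a simple two-dimensional Chebyshev fit, a uniform measurement uncertainty and synthetic data; the function names are mine, and the papers listed under Sources describe the procedure actually used in the ADVISE and MOTIVATE projects.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def decompose(field, order=5):
    """Fit a 2D Chebyshev polynomial to a data field and return the
    coefficients as a feature vector (a simple stand-in for the image
    decomposition described in the cited papers)."""
    ny, nx = field.shape
    x = np.linspace(-1.0, 1.0, nx)
    y = np.linspace(-1.0, 1.0, ny)
    X, Y = np.meshgrid(x, y)
    basis = C.chebvander2d(X.ravel(), Y.ravel(), [order, order])
    coeffs, *_ = np.linalg.lstsq(basis, field.ravel(), rcond=None)
    return coeffs

def fraction_within_uncertainty(measured, predicted, u_meas, order=5):
    """Fraction of feature-vector differences that lie within the
    (assumed uniform) measurement uncertainty u_meas."""
    s_m = decompose(measured, order)
    s_p = decompose(predicted, order)
    return float(np.mean(np.abs(s_p - s_m) <= u_meas))

# Illustrative use with synthetic strain-like fields
rng = np.random.default_rng(0)
truth = np.outer(np.linspace(0.0, 1.0, 50), np.linspace(0.0, 1.0, 60))
measured = truth + rng.normal(0.0, 0.01, truth.shape)   # noisy 'measurement'
predicted = 1.02 * truth                                # slightly biased 'model'
print(fraction_within_uncertainty(measured, predicted, u_meas=0.05))
```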

Sources:

Dvurecenska, K., Diamantakos, I., Hack, E., Lampeas, G., Patterson, E.A. and Siebert, T., 2020. The validation of a full-field deformation analysis of an aircraft panel: A case study. The Journal of Strain Analysis for Engineering Design, p.0309324720971140.

Sebastian, C., Hack, E. and Patterson, E., 2013. An approach to the validation of computational solid mechanics models for strain analysis. The Journal of Strain Analysis for Engineering Design, 48(1), pp.36-47.

Wang, W., Mottershead, J.E., Sebastian, C.M. and Patterson, E.A., 2011. Shape features and finite element model updating from full-field strain data. International Journal of Solids and Structures, 48(11-12), pp.1644-1657.

For more posts on the MOTIVATE project: https://realizeengineering.blog/category/myresearch/motivate-project/

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660 and the Swiss State Secretariat for Education, Research and Innovation under contract number 17.00064.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.

Predicting the future through holistic awareness

Image: view along a hotel corridor.

It is traditional at the start of the year to speculate on what will happen in the new year. However, as Niels Bohr is reputed to have said, ‘Prediction is very difficult, especially about the future’. Some people have suggested that our brains are constantly predicting the future: we weigh up the options for what might happen next before choosing a course of action. Our ancestors might have watched a fish swimming near a river bank and predicted where it would be a moment later when their spear entered the water. Or, on a longer timescale, they predicted that seeds planted at a particular time of year would yield a crop some months later. Our predictions are not always correct, but our lives depend on enough of them being reliable that we have evolved to be good predictors of the immediate future.

In Chinese thought, a distinction is made between predicting the near and the distant future because the former is possible and the latter is impossible, at least with any degree of confidence (Simandan, 2018). Wisdom can be considered to be understanding the futility of trying to predict the distant future while being able to sense the near future through an acute awareness of, and immersion in, one’s surroundings. This implies that a wise person can go beyond the everyday predictions of the immediate future, made largely unconsciously by our brains, and anticipate events on a slightly longer timescale, the near future. In engineering terms, events in the near future are short-term behaviour dominated by the current status of the system, whereas events in the distant future are largely determined by external interactions with the system. This seems entirely consistent with the Chinese concept of wisdom arising from ‘vanishing into things’, which means becoming immersed in a situation and hence being able to sense the current status of the system and reliably anticipate the near future. Some engineers might call it intuition, which has been defined as ‘judgments that arise through rapid, non-conscious and holistic associations’ (Dane & Pratt, 2007). So, in 2021 I hope to continue to exercise my intuition and remain immersed in a number of issues, but I am not going to attempt to predict any distant events.

References:

Dane, E. and Pratt, M.G., 2007. Exploring intuition and its role in managerial decision making. Academy of management review, 32(1), pp.33-54.

Simandan, D., 2018. Wisdom and foresight in Chinese thought: sensing the immediate future. Journal of Futures Studies, 22(3), pp.35-50.


Credible predictions for regulatory decision-making

Regulators are charged with ensuring that manufactured products, from aircraft and nuclear power stations to cosmetics and vaccines, are safe. The general public seeks certainty that these devices, and the materials and chemicals they are made from, will not harm them or the environment. Technologists who design and manufacture these products know that absolute certainty is unattainable and near-certainty is unaffordable. Hence, they attempt to deliver the service or product that society desires while ensuring that the risks are As Low As Reasonably Practicable (ALARP). The role of regulators is to independently assess the risks, make a judgment on their acceptability and thus decide whether the operation of a power station or the distribution of a vaccine can go ahead. These are difficult decisions with huge potential consequences – just think of the more than three hundred people killed in the two crashes of Boeing 737 Max airplanes or the 10,000 or so people affected by birth defects caused by the drug thalidomide.

Evidence presented to support applications for regulatory approval is largely based on physical tests, for example fatigue tests on an aircraft structure or toxicological tests using animals. In some cases the physical tests might not be entirely representative of the real-life situation, which can make it difficult to make decisions using the data; for instance, a ground test on an airplane is not the same as a flight test, and in many respects the animals used in toxicity testing are physiologically different from humans. In addition, physical tests are expensive and time-consuming, which both drives up the cost of seeking regulatory approval and slows down the translation of innovative new products to the market. The almost ubiquitous use of computer-based simulations to support the research, development and design of manufactured products inevitably leads to their use in supporting regulatory applications. This creates challenges for regulators who must judge the trustworthiness of predictions from these simulations [see ‘Fake facts & untrustworthy predictions’ on December 4th, 2019].

It is standard practice for modellers to demonstrate the validity of their models; however, validation does not automatically lead to acceptance of predictions by decision-makers. Acceptance is more closely related to scientific credibility. I have been working across a number of disciplines on the scientific credibility of models, including in engineering, where multi-physics phenomena are important, such as hypersonic flight and fusion energy [see ‘Thought leadership in fusion energy’ on October 9th, 2019], and in computational biology and toxicology [see ‘Hierarchical modelling in engineering and biology’ on March 14th, 2018]. Working together with my collaborators in these disciplines, we have developed a common set of factors which underpin scientific credibility, based on principles drawn from the literature on the philosophy of science and designed to be both discipline-independent and method-agnostic [Patterson & Whelan, 2019; Patterson et al, 2021]. We hope that our cross-disciplinary approach will break down the subject silos that have become established as different scientific communities have developed their own frameworks for validating models.
As mentioned above, the process of validation tends to be undertaken by model developers and, in some sense, belongs to them; whereas credibility is not exclusive to the developer but is a form of trust that needs to be shared with the decision-maker who seeks to use the predictions to inform their decision [see ‘Credibility is in the eye of the beholder’ on April 20th, 2016]. Trust requires a common knowledge base and understanding that is usually built through interactions. We hope the credibility factors will provide a framework for these interactions as well as a structure for building a portfolio of evidence that demonstrates the reliability of a model.

References:

Patterson, E.A. and Whelan, M.P., 2019. On the validation of variable fidelity multi-physics simulations. Journal of Sound and Vibration, 448, pp.247-258.

Patterson, E.A., Whelan, M.P. and Worth, A., 2021. The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application. Computational Toxicology, 17, p.100144.

Image: Extract from abstract by Zahrah Resh.

Forecasts and chimpanzees throwing darts

During the coronavirus pandemic, politicians have taken to telling us that their decisions are based on the advice of their experts, while the news media have bombarded us with predictions from experts. Perhaps not unexpectedly, with the benefit of hindsight, many of these decisions and predictions appear to have been ill-advised or inaccurate, which is likely to lead to a loss of trust in both politicians and experts. However, this is unsurprising: the reliability of experts, particularly those willing to make public pronouncements, is well known to be dubious. Professor Philip E. Tetlock of the University of Pennsylvania has assessed the accuracy of forecasts made by purported experts over two decades and found that they were little better than a chimpanzee throwing darts; indeed, the better-known experts seemed to be worse at forecasting [Tetlock & Gardner, 2016]. In other words, we should assign less credibility to those experts whose advice is more frequently sought by politicians or quoted in the media.

Tetlock’s research has found that the best forecasters are better at inductive reasoning, pattern detection, cognitive flexibility and open-mindedness [Mellers et al, 2015]. People with these attributes tend not to express unambiguous opinions but instead attempt to balance all factors in reaching a view that embraces many uncertainties. Politicians and the media believe that we want to hear a simple message, unadorned by the complications of describing reality; hence, they avoid the best forecasters and prefer those who provide a clear but usually inaccurate message. Perhaps that’s why engineers are rarely interviewed by the media or quoted in the press: they tend to be good at inductive reasoning, pattern detection and cognitive flexibility, and to be open-minded [see ‘Einstein and public engagement’ on August 8th, 2018]. Of course, this was well known to the Chinese philosopher Lao Tzu, who is reported to have said: ‘Those who have knowledge, don’t predict. Those who predict, don’t have knowledge.’
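
For context on how forecast accuracy is typically quantified in this kind of research, probabilistic forecasts are commonly evaluated with the Brier score, the mean squared difference between the forecast probabilities and what actually happened. The sketch below uses invented numbers rather than data from Tetlock’s studies, and adopts the single-probability form of the score, in which a ‘dart-throwing’ forecaster who always answers 50% scores 0.25.

```python
import numpy as np

def brier_score(forecast_probs, outcomes):
    """Mean squared difference between forecast probabilities and binary
    outcomes (0 is a perfect record; always forecasting 0.5 gives 0.25)."""
    p = np.asarray(forecast_probs, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return float(np.mean((p - o) ** 2))

# Invented example: five yes/no questions, 1 = the event happened
outcomes = [1, 0, 0, 1, 0]
print(brier_score([0.9, 0.9, 0.1, 0.2, 0.8], outcomes))  # confident but often wrong: 0.422
print(brier_score([0.6, 0.4, 0.3, 0.6, 0.4], outcomes))  # cautious and well calibrated: 0.146
print(brier_score([0.5, 0.5, 0.5, 0.5, 0.5], outcomes))  # the dart-throwing chimpanzee: 0.25
```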

References:

Mellers, B., Stone, E., Atanasov, P., Rohrbaugh, N., Metz, S.E., Ungar, L., Bishop, M.M., Horowitz, M., Merkle, E. and Tetlock, P., 2015. The psychology of intelligence analysis: Drivers of prediction accuracy in world politics. Journal of Experimental Psychology: Applied, 21(1), pp.1-14.

Tetlock, P.E. and Gardner, D., 2016. Superforecasting: The art and science of prediction. London: Penguin Random House.