Tag Archives: model validation

Credible predictions for regulatory decision-making

Regulators are charged with ensuring that manufactured products, from aircraft and nuclear power stations to cosmetics and vaccines, are safe.  The general public seeks certainty that these devices, and the materials and chemicals they are made from, will not harm them or the environment.  Technologists who design and manufacture these products know that absolute certainty is unattainable and near-certainty is unaffordable.  Hence, they attempt to deliver the service or product that society desires while ensuring that the risks are As Low As Reasonably Practicable (ALARP).  The role of regulators is to independently assess the risks, make a judgment on their acceptability and thus decide whether the operation of a power station or distribution of a vaccine can go ahead.  These are difficult decisions with huge potential consequences – just think of the more than three hundred people killed in the two crashes of Boeing 737 Max airplanes or the 10,000 or so people affected by birth defects caused by the drug thalidomide.

Evidence presented to support applications for regulatory approval is largely based on physical tests, for example fatigue tests on an aircraft structure or toxicological tests using animals.  In some cases the physical tests might not be entirely representative of the real-life situation, which can make it difficult to make decisions using the data; for instance, a ground test on an airplane is not the same as a flight test, and in many respects the animals used in toxicity testing are physiologically different to humans.  In addition, physical tests are expensive and time-consuming, which both drives up the cost of seeking regulatory approval and slows down the translation of innovative new products to the market.  The almost ubiquitous use of computer-based simulations to support the research, development and design of manufactured products inevitably leads to their use in supporting regulatory applications.  This creates challenges for regulators, who must judge the trustworthiness of predictions from these simulations [see ‘Fake facts & untrustworthy predictions‘ on December 4th, 2019].

It is standard practice for modellers to demonstrate the validity of their models; however, validation does not automatically lead to acceptance of predictions by decision-makers.  Acceptance is more closely related to scientific credibility.  I have been working across a number of disciplines on the scientific credibility of models, including in engineering where multi-physics phenomena are important, such as hypersonic flight and fusion energy [see ‘Thought leadership in fusion energy‘ on October 9th, 2019], and in computational biology and toxicology [see ‘Hierarchical modelling in engineering and biology‘ on March 14th, 2018]. Working together with my collaborators in these disciplines, we have developed a common set of factors which underpin scientific credibility, based on principles drawn from the literature on the philosophy of science and designed to be both discipline-independent and method-agnostic [Patterson & Whelan, 2019; Patterson et al., 2021]. We hope that our cross-disciplinary approach will break down the subject silos that have become established as different scientific communities have developed their own frameworks for validating models.
As mentioned above, the process of validation tends to be undertaken by model developers and, in some sense, belongs to them; whereas credibility is not exclusive to the developer but is a trust that needs to be shared with the decision-maker who seeks to use the predictions to inform their decisions [see ‘Credibility is in the eye of the beholder‘ on April 20th, 2016].  Trust requires a common knowledge base and understanding that is usually built through interactions.  We hope the credibility factors will provide a framework for these interactions, as well as a structure for building a portfolio of evidence that demonstrates the reliability of a model.
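
The credibility factors themselves are set out in the papers referenced below; purely as an illustration of the idea of a structured portfolio of evidence, here is a minimal sketch in Python of how evidence against such factors might be recorded and summarised for a decision-maker.  The factor names, class and report format are my inventions for this post, not the published framework.

```python
# Illustrative sketch only: recording a portfolio of evidence against
# credibility factors. The factor names below are placeholders, NOT the
# published list from Patterson & Whelan (2019) or Patterson et al. (2021).
from dataclasses import dataclass, field

@dataclass
class CredibilityFactor:
    name: str                                           # a factor underpinning credibility
    evidence: list[str] = field(default_factory=list)   # portfolio entries
    satisfied: bool = False                             # developer/decision-maker judgment

def credibility_report(factors: list[CredibilityFactor]) -> str:
    """Summarise which factors have supporting evidence."""
    lines = []
    for f in factors:
        status = "satisfied" if f.satisfied and f.evidence else "open"
        lines.append(f"{f.name}: {status} ({len(f.evidence)} evidence items)")
    return "\n".join(lines)

# Hypothetical usage with invented factor names:
factors = [
    CredibilityFactor("transparency of model assumptions"),
    CredibilityFactor("quantified comparison with measurements"),
    CredibilityFactor("documented uncertainty in predictions"),
]
factors[1].evidence.append("validation metric report, load case 1")
factors[1].satisfied = True
print(credibility_report(factors))
```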

References:

Patterson EA & Whelan MP, On the validation of variable fidelity multi-physics simulations, J. Sound & Vibration, 448:247-258, 2019.

Patterson EA, Whelan MP & Worth A, The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application, Computational Toxicology, 17: 100144, 2021.

Image: Extract from abstract by Zahrah Resh.

35 years later and still working on a PhD thesis

It is about 35 years since I graduated with my PhD.  It was not ground-breaking although, together with my supervisor, I did publish about half a dozen technical papers based on it, and some of those papers are still being cited, including one this month, which surprises me.  I performed experiments and computer modelling on the load and stress distribution in threaded fasteners, or nuts and bolts.  There were no digital cameras and no computed tomography; so the experiments involved making and sectioning models of nuts and bolts in transparent plastic using three-dimensional photoelasticity [see ‘Art and Experimental Mechanics‘ on July 17th, 2012].  I took hundreds of photographs of the sections and scanned the negatives in a microdensitometer.  The computer modelling was equally slow and laborious because there were no graphical user interfaces (GUIs); instead, I had to type strings of numbers into a terminal, wait overnight while the calculations were performed, and then study reams of numbers printed out on long rolls of paper.

The tedium of the experimental work inspired me to spend the following 15 to 20 years utilising digital technology to revolutionise the field of experimental mechanics.  In the past 15 to 20 years, I have moved back towards computer modelling and focused on transforming the way in which measurement data are used to improve the fidelity of computer models and to establish confidence in their predictions [see ‘Establishing fidelity and credibility in tests and simulations‘ on July 25th, 2018].

Since completing my PhD, I have supervised 32 students to successful completion of their PhDs.  You might think that was a straightforward process: an initial three years for the first one to complete their research and write their thesis, followed by one graduating every year.  But that is not how it worked out; instead, I have had fallow years as well as productive years.  At the moment, I am in a productive period, having graduated two PhD students per year since 2017 – that’s a lot of reading, and I have spent much of the last two weekends reviewing a thesis, which is why PhD theses are the topic of this post!

Footnote: the most cited paper from my thesis is ‘Kenny B, Patterson EA. Load and stress distribution in screw threads. Experimental Mechanics. 1985 Sep 1;25(3):208-13‘ and this month it was cited by ‘Zhang D, Wang G, Huang F, Zhang K. Load-transferring mechanism and calculation theory along engaged threads of high-strength bolts under axial tension. Journal of Constructional Steel Research. 2020 Sep 1;172:106153‘.

The blind leading the blind

Three years after it started, the MOTIVATE project has come to an end [see ‘Getting smarter’ on June 21st, 2017].  The focus of the project has been improving the quality of validation for predictions of structural behaviour in aircraft using fewer, better physical tests.  We have developed an enhanced flowchart for model validation [see ‘Spontaneously MOTIVATEd’ on June 27th, 2018], a method for quantifying uncertainty in measurements of deformation in an industrial environment [see ‘Industrial uncertainty’ on December 12th, 2018] and a toolbox for quantifying the extent to which predictions from computational models represent measurements made in the real world [see ‘Alleviating industrial uncertainty’ on May 13th, 2020].

In the last phase of the project, we demonstrated all of these innovations on the fuselage nose section of an aircraft.  The region of interest was the fuselage skin behind the cockpit window, for which the out-of-plane displacements resulting from an internal pressurisation load were predicted using a finite element model [see ‘Did cubism inspire engineering analysis?’ on January 25th, 2017].  The computational model was provided by Airbus and is shown on the left in the top graphic, with the predictions for the region of interest on the right.  We used a stereoscopic imaging system to record images of a speckle pattern on the fuselage before and after pressurisation; and from these images, we evaluated the out-of-plane displacements using digital image correlation (DIC) [see ‘256 shades of grey‘ on January 22nd, 2014 for a brief explanation of DIC].  The bottom graphic shows the measurements being made with assistance from an Airbus contractor, Strain Solutions Limited.

We compared the predictions quantitatively against the measurements in a double-blind process, which meant that the modellers and experimenters had no access to one another’s results.  The predictions were made by one MOTIVATE partner, Athena Research Centre; the measurements were made by another partner, Dantec Dynamics GmbH, supported by Strain Solutions Limited; and the quantitative comparison was made by the project coordinator, the University of Liverpool, as sketched below.  We found that the level of agreement between the predictions and measurements changed with the level of pressurisation; however, the main outcome was the demonstration that it is possible to perform a double-blind validation process to quantify the extent to which predictions represent the real-world behaviour of a full-scale aerospace structure.
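
To illustrate the coordinator’s comparison step, here is a minimal sketch in Python.  The synthetic fields, grids and simple RMS discrepancy are stand-ins, since neither the Airbus predictions nor the DIC measurements are public; this is an assumption-laden illustration, not the MOTIVATE toolbox itself.

```python
# Minimal sketch of comparing a predicted out-of-plane displacement field
# against a measured one, using synthetic data in place of the real fields.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

rng = np.random.default_rng(0)

# Predicted out-of-plane displacement w(x, y) on the model's grid (synthetic).
xp, yp = np.linspace(0, 1, 80), np.linspace(0, 1, 60)
Wp = np.outer(np.sin(np.pi * yp), np.sin(np.pi * xp))   # shape (60, 80)

# Measured field on the DIC system's coarser grid, with noise (synthetic).
xm, ym = np.linspace(0, 1, 40), np.linspace(0, 1, 30)
Wm = np.outer(np.sin(np.pi * ym), np.sin(np.pi * xm))
Wm += rng.normal(0.0, 0.02, Wm.shape)

# Interpolate the prediction onto the measurement grid so the two fields
# can be compared point by point.
interp = RegularGridInterpolator((yp, xp), Wp)
Yg, Xg = np.meshgrid(ym, xm, indexing="ij")
Wp_on_m = interp(np.stack([Yg.ravel(), Xg.ravel()], axis=-1)).reshape(Wm.shape)

# A simple RMS discrepancy normalised by the measured range (an illustrative
# statistic, not the project's validation metric).
rms = np.sqrt(np.mean((Wp_on_m - Wm) ** 2))
print(f"normalised RMS discrepancy: {rms / np.ptp(Wm):.3f}")
```

In the double-blind process, only the coordinator runs this comparison step; the modellers and experimenters each supply their field without seeing the other’s.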

The content of this post is taken from a paper that was to be given at a conference later this summer; however, the conference has been postponed due to the pandemic.  The details of the paper are: Patterson EA, Diamantakos I, Dvurecenska K, Greene RJ, Hack E, Lampeas G, Lomnitz M & Siebert T, Application of a model validation protocol to an aircraft cockpit panel, submitted to the International Conference on Advances in Experimental Mechanics to be held in Oxford in September 2021.  I would like to thank the authors for permission to write about the results in this post, Linden Harris of Airbus SAS for enabling the study, and both him and Eszter Szigeti for providing technical advice.

For more on the validation flowchart see: Hack E, Burguete R, Dvurecenska K, Lampeas G, Patterson E, Siebert T & Szigeti E, Steps towards industrial validation experiments, In Multidisciplinary Digital Publishing Institute Proceedings (Vol. 2, No. 8, p. 391) https://www.mdpi.com/2504-3900/2/8/391

For more posts on the MOTIVATE project: https://realizeengineering.blog/category/myresearch/motivate-project/

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660 and the Swiss State Secretariat for Education, Research and Innovation under contract number 17.00064.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.

Alleviating industrial uncertainty

Want to know how to assess the quality of predictions of structural deformation from a computational model, and how to diagnose the causes of differences between measurements and predictions?  The MOTIVATE project has the answers; that might seem like an over-assertive claim, but read on and make your own judgment.  Eighteen months ago, I reported on a new method for quantifying the uncertainty present in measurements of deformation made in an industrial environment [see ‘Industrial uncertainty’ on December 12th, 2018] that we were trialling on a 1 m square panel of an aircraft fuselage.  Recently, we have used the measurement uncertainty we found to make judgments about the quality of predictions from computer models of the panel under compressive loading.

The top graphic shows the outside surface of the panel (left), with a speckle pattern to allow measurements of its deformation using digital image correlation (DIC) [see ‘256 shades of grey‘ on January 22nd, 2014 for a brief explanation of DIC], and the inside surface (right), with stringers and ribs.  The bottom graphic shows our results for two load cases: a 50 kN compression (top row) and a 50 kN compression combined with 1 degree of torsion (bottom row).  The left column shows the out-of-plane deformation measured using a stereoscopic DIC system and the middle column shows the corresponding predictions from a computational model using finite element analysis [see ‘Did cubism inspire engineering analysis?’ on January 25th, 2017].

We have described these deformation fields in a reduced form, as feature vectors, by applying image decomposition [see ‘Recognizing strain’ on October 28th, 2015 for a brief explanation of image decomposition].  The elements of the feature vectors are known as shape descriptors, and corresponding pairs of them, from the measurements and predictions, are plotted in the graphs on the right in the bottom graphic for each load case.  If the predictions were in perfect agreement with the measurements then all of the points on these graphs would lie on the line of equality [y = x], which is the solid line on each graph.  However, perfect agreement is unobtainable because there will always be uncertainty present; so the question arises: how much deviation from the solid line is acceptable?  One answer is that the deviation should be less than the uncertainty present in the measurements, which we evaluated with our new method and which is shown by the dashed lines.  Hence, when all of the points fall inside the dashed lines, the predictions are at least as good as the measurements.  If some points lie outside the dashed lines, then we can look at the form of the corresponding shape descriptors to start diagnosing why we have significant differences between our model and experiment.  The forms of these outlying shape descriptors are shown as insets on the plots.

However, busy or non-technical decision-makers are often not interested in this level of detailed analysis and instead just want to know how good the predictions are.  To answer this question, we have implemented a validation metric (VM) that we developed [see ‘Million to one’ on November 21st, 2018], which allows us to state the probability that the predictions and measurements are from the same population, given the known uncertainty in the measurements – these probabilities are shown in the black boxes superimposed on the graphs.  A minimal sketch of the decomposition and the dashed-line test follows below.
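
To make the decomposition and the acceptance-band check concrete, here is a minimal sketch in Python.  A Chebyshev polynomial basis is one common choice for image decomposition of rectangular fields, but the degree, the synthetic fields and the uncertainty value below are all assumptions for illustration; the published MOTIVATE toolbox and validation metric are not reproduced here.

```python
# Illustrative sketch: decompose displacement fields into shape descriptors
# and apply the acceptance-band ("dashed lines") test described above.
import numpy as np
from numpy.polynomial import chebyshev as C

def shape_descriptors(field: np.ndarray, deg: int = 5) -> np.ndarray:
    """Least-squares coefficients of a 2D Chebyshev fit to a field;
    the coefficients serve as the elements of the feature vector."""
    ny, nx = field.shape
    x = np.linspace(-1, 1, nx)
    y = np.linspace(-1, 1, ny)
    Yg, Xg = np.meshgrid(y, x, indexing="ij")
    # Design matrix of Chebyshev products T_i(x) * T_j(y) up to degree `deg`.
    A = C.chebvander2d(Xg.ravel(), Yg.ravel(), [deg, deg])
    coeffs, *_ = np.linalg.lstsq(A, field.ravel(), rcond=None)
    return coeffs

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 64)
truth = np.outer(1 - x**2, 1 - x**2)                # synthetic out-of-plane field
measured = truth + rng.normal(0, 0.01, truth.shape) # noisy "measurement"
predicted = 1.02 * truth                            # model slightly over-predicts

s_meas = shape_descriptors(measured)
s_pred = shape_descriptors(predicted)

# Stand-in for the evaluated measurement uncertainty (an assumed value).
u_meas = 0.05 * np.abs(s_meas).max()
in_band = np.abs(s_pred - s_meas) <= u_meas         # the "dashed lines" test
print(f"{in_band.mean():.0%} of descriptor pairs lie within the uncertainty band")
```

The probabilities quoted in the post come from the validation metric described in ‘Million to one’; the in-band fraction printed by this sketch is only a crude stand-in for that statistic.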

These novel methods create a toolbox for alleviating uncertainty about predictions of structural behaviour in industrial contexts.  Please get in touch if you want more information in order to test these tools yourself.

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660 and the Swiss State Secretariat for Education, Research and Innovation under contract number 17.00064.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.