Category Archives: FACTS

The blind leading the blind

Three years after it started, the MOTIVATE project has come to an end [see ‘Getting smarter’ on June 21st, 2017]. The focus of the project has been on improving the quality of validation for predictions of structural behaviour in aircraft using fewer, better physical tests. We have developed an enhanced flowchart for model validation [see ‘Spontaneously MOTIVATEd’ on June 27th, 2018], a method for quantifying uncertainty in measurements of deformation in an industrial environment [see ‘Industrial uncertainty’ on December 12th, 2018] and a toolbox for quantifying the extent to which predictions from computational models represent measurements made in the real world [see ‘Alleviating industrial uncertainty’ on May 13th, 2020]. In the last phase of the project, we demonstrated all of these innovations on the fuselage nose section of an aircraft. The region of interest was the fuselage skin behind the cockpit window, for which the out-of-plane displacements resulting from an internal pressurisation load were predicted using a finite element model [see ‘Did cubism inspire engineering analysis?’ on January 25th, 2017]. The computational model was provided by Airbus and is shown on the left in the top graphic, with the predictions for the region of interest on the right. We used a stereoscopic imaging system to record images of a speckle pattern on the fuselage before and after pressurisation; from these images, we evaluated the out-of-plane displacements using digital image correlation (DIC) [see ‘256 shades of grey‘ on January 22nd, 2014 for a brief explanation of DIC]. The bottom graphic shows the measurements being made with assistance from an Airbus contractor, Strain Solutions Limited. We compared the predictions quantitatively against the measurements in a double-blind process, which meant that the modellers and experimenters had no access to one another’s results. The predictions were made by one MOTIVATE partner, Athena Research Centre; the measurements were made by another partner, Dantec Dynamics GmbH, supported by Strain Solutions Limited; and the quantitative comparison was made by the project coordinator, the University of Liverpool. We found that the level of agreement between the predictions and measurements changed with the level of pressurisation; however, the main outcome was the demonstration that it is possible to perform a double-blind validation process to quantify the extent to which predictions represent the real-world behaviour of a full-scale aerospace structure.
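To illustrate the kind of comparison involved, here is a minimal sketch in Python of bringing two independently produced displacement fields onto a common basis. It is not the project’s implementation: the file names and CSV layout (x, y, w triplets from each team) are assumptions for illustration, and it uses simple linear interpolation to map the finite element predictions onto the DIC measurement points before computing a summary difference. The actual MOTIVATE comparison used image decomposition and a validation metric, described in ‘Alleviating industrial uncertainty’ below.

```python
# Minimal sketch (hypothetical file names and formats): interpolate FE predictions
# onto the DIC measurement grid so the two out-of-plane displacement fields can be
# compared point-by-point, with neither team seeing the other's results beforehand.
import numpy as np
from scipy.interpolate import griddata

# Each team exports its field independently as columns: x, y, w (out-of-plane)
pred = np.loadtxt("fe_predictions.csv", delimiter=",", skiprows=1)   # hypothetical
meas = np.loadtxt("dic_measurements.csv", delimiter=",", skiprows=1) # hypothetical

# Interpolate predictions onto the measurement points (common basis for comparison)
w_pred = griddata(pred[:, :2], pred[:, 2], meas[:, :2], method="linear")
w_meas = meas[:, 2]

valid = ~np.isnan(w_pred)  # drop measurement points outside the modelled region
rms_diff = np.sqrt(np.mean((w_pred[valid] - w_meas[valid]) ** 2))
print(f"RMS difference in out-of-plane displacement: {rms_diff:.3f} mm")
```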

The content of this post is taken from a paper that was to be given at a conference later this summer; however, the conference has been postponed due to the pandemic. The details of the paper are: Patterson EA, Diamantakos I, Dvurecenska K, Greene RJ, Hack E, Lampeas G, Lomnitz M & Siebert T, Application of a model validation protocol to an aircraft cockpit panel, submitted to the International Conference on Advances in Experimental Mechanics to be held in Oxford in September 2021. I would like to thank the authors for permission to write about the results in this post, Linden Harris of Airbus SAS for enabling the study, and both him and Eszter Szigeti for providing technical advice.

For more on the validation flowchart see: Hack E, Burguete R, Dvurecenska K, Lampeas G, Patterson E, Siebert T & Szigeti E, Steps towards industrial validation experiments, Multidisciplinary Digital Publishing Institute Proceedings (Vol. 2, No. 8, p. 391), https://www.mdpi.com/2504-3900/2/8/391

For more posts on the MOTIVATE project: https://realizeengineering.blog/category/myresearch/motivate-project/

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660 and the Swiss State Secretariat for Education, Research and Innovation under contract number 17.00064.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.

Alleviating industrial uncertainty

Want to know how to assess the quality of predictions of structural deformation from a computational model, and how to diagnose the causes of differences between measurements and predictions? The MOTIVATE project has the answers; that might seem like an over-assertive claim, but read on and make your own judgment. Eighteen months ago, I reported on a new method for quantifying the uncertainty present in measurements of deformation made in an industrial environment [see ‘Industrial uncertainty’ on December 12th, 2018] that we were trialling on a 1 m square panel of an aircraft fuselage. Recently, we have used the measurement uncertainty we found to make judgments about the quality of predictions from computer models of the panel under compressive loading. The top graphic shows the outside surface of the panel (left) with a speckle pattern to allow measurements of its deformation using digital image correlation (DIC) [see ‘256 shades of grey‘ on January 22nd, 2014 for a brief explanation of DIC]; and the inside surface (right) with stringers and ribs. The bottom graphic shows our results for two load cases: a 50 kN compression (top row) and a 50 kN compression combined with 1 degree of torsion (bottom row). The left column shows the out-of-plane deformation measured using a stereoscopic DIC system and the middle column shows the corresponding predictions from a computational model using finite element analysis [see ‘Did cubism inspire engineering analysis?’ on January 25th, 2017]. We have described these deformation fields in a reduced form, as feature vectors, by applying image decomposition [see ‘Recognizing strain’ on October 28th, 2015 for a brief explanation of image decomposition]. The elements of the feature vectors are known as shape descriptors, and corresponding pairs of them, from the measurements and predictions, are plotted in the graphs on the right in the bottom graphic for each load case. If the predictions were in perfect agreement with the measurements then all of the points on these graphs would lie on the line of equality [y = x], which is the solid line on each graph. However, perfect agreement is unobtainable because there will always be uncertainty present; so the question arises: how much deviation from the solid line is acceptable? One answer is that the deviation should be less than the uncertainty present in the measurements, which we evaluated with our new method and which is shown by the dashed lines. Hence, when all of the points fall inside the dashed lines, the predictions are at least as good as the measurements. If some points lie outside the dashed lines, then we can look at the form of the corresponding shape descriptors to start diagnosing why we have significant differences between our model and experiment. The forms of these outlying shape descriptors are shown as insets on the plots. However, busy or non-technical decision-makers are often not interested in this level of detailed analysis and instead just want to know how good the predictions are. To answer this question, we have implemented a validation metric (VM) that we developed [see ‘Million to one’ on November 21st, 2018], which allows us to state the probability that the predictions and measurements are from the same population, given the known uncertainty in the measurements – these probabilities are shown in the black boxes superimposed on the graphs.
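For readers who want to experiment with the idea, here is a minimal sketch in Python of the decomposition-and-comparison step. It is not the project’s implementation: it assumes both fields are sampled on the same regular grid, uses a two-dimensional Chebyshev polynomial basis from NumPy as the image decomposition (the choice of basis, the uncertainty value and the file names are all assumptions for illustration), and applies a simple pass/fail band check rather than the validation metric itself.

```python
# Minimal sketch, not the project's exact implementation: decompose a displacement
# field into shape descriptors using a 2D Chebyshev polynomial basis, then check
# each predicted/measured descriptor pair against the measurement uncertainty
# band around the line of equality [y = x].
import numpy as np
from numpy.polynomial import chebyshev as C

def shape_descriptors(field, degree=6):
    """Fit a 2D Chebyshev series to a field on a regular grid and return the
    coefficients (the 'shape descriptors') as a flat vector."""
    ny, nx = field.shape
    x = np.linspace(-1.0, 1.0, nx)
    y = np.linspace(-1.0, 1.0, ny)
    # Pseudo-Vandermonde matrix for the 2D basis, solved by least squares
    V = C.chebvander2d(*np.meshgrid(x, y), [degree, degree])
    coeffs, *_ = np.linalg.lstsq(V.reshape(-1, (degree + 1) ** 2),
                                 field.ravel(), rcond=None)
    return coeffs

# Hypothetical fields on the same grid (in practice: DIC export and FE prediction)
w_meas = np.load("w_measured.npy")   # hypothetical file
w_pred = np.load("w_predicted.npy")  # hypothetical file

s_meas = shape_descriptors(w_meas)
s_pred = shape_descriptors(w_pred)

u_meas = 0.05  # measurement uncertainty in descriptor units (assumed value)
outliers = np.abs(s_pred - s_meas) > u_meas
print(f"{(~outliers).mean():.0%} of descriptor pairs lie within the uncertainty band")
```

The validation metric described above goes further, converting the deviations of the descriptor pairs into a single probability that the predictions and measurements belong to the same population; the band check in this sketch is only a simpler pass/fail proxy for that judgment.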

These novel methods create a toolbox for alleviating uncertainty about predictions of structural behaviour in industrial contexts. Please get in touch if you would like more information or want to test these tools yourself.

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660 and the Swiss State Secretariat for Education, Research and Innovation under contract number 17.00064.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.

Same problems in a different language

I spent a lot of time on trains last week. I left Liverpool on Tuesday evening for Bristol and spent Wednesday at Airbus in Filton discussing the implementation of the technologies being developed in the EU Clean Sky 2 projects MOTIVATE and DIMES. On Wednesday evening I travelled to Bracknell, and on Thursday I gave a seminar at Syngenta on model credibility in predictive toxicology before heading home to Liverpool. But on Friday I was on the train again, to Manchester this time, to listen to a group of my PhD students presenting their projects to their peers in our new Centre for Doctoral Training, called Growing skills for Reliable Economic Energy from Nuclear, or GREEN. The common thread, besides the train journeys, is the Fidelity And Credibility of Testing and Simulation (FACTS). My research group is working on how to demonstrate the fidelity of predictions from models and how to establish trust in both predictions from computational models and measurements from experiments, which are often themselves ‘models’ of the real world. The issues are similar whether we are considering the structural performance of aircraft [as on Wednesday], the impact of agro-chemicals [as on Thursday], or the performance of fusion energy and the impact of a geological disposal site [as on Friday] (see ‘Hierarchical modelling in engineering and biology‘ on March 14th, 2018). The scientific and technical communities associated with each application talk a different language, in the sense that they use different technical jargon and acronyms; and they are surprised and interested to discover that similar problems are being tackled by communities that they rarely think about or encounter.

On the trustworthiness of multi-physics models

I stayed in Sheffield city centre a few weeks ago and walked past the standard measures in the photograph on my way to speak at a workshop. In the past, when the cutlery and tool-making industry in Sheffield was focussed around small workshops, or ‘little mesters’ as they were known, these standards would have been used to check the tools being manufactured. A few hundred years later, the range of standards in existence has extended far beyond the weights and measures where it started, and now includes standards for processes and artefacts as well as for measurements. The process of validating computational models of engineering infrastructure is moving slowly towards establishing an internationally recognised standard [see two of my earliest posts: ‘Model validation‘ on September 18th, 2012 and ‘Setting standards‘ on January 29th, 2014]. We have guidelines that recommend approaches for different parts of the validation process [see ‘Setting standards‘ on January 29th, 2014]; however, many types of computational model present significant challenges when establishing their reliability [see ‘Spatial-temporal models of protein structures‘ on March 27th, 2019]. Under the auspices of the MOTIVATE project, we are gathering experts in Zurich on November 5th, 2019 to discuss the challenges of validating multi-physics models, establishing credibility and the future use of data from experiments. It is the fourth in a series of workshops held previously in Shanghai, London and Munich. For more information and to register, follow this link. Come and join our discussions in one of my favourite cities, where we will be following ‘In Einstein’s footprints‘ [posted on February 27th, 2019].

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.