
Nudging discoveries along the innovation path

The path from a discovery to a successful innovation is often tortuous and many good ideas fall by the wayside. I have periodically reported on progress along the path for our novel technique for extracting feature vectors from maps of strain data [see ‘Recognizing strain’ on October 28th, 2015] and its application to validating models of structures by comparing predicted and measured data [see ‘Million to one’ on November 21st, 2018], and to tracking damage in composite materials [see ‘Spatio-temporal damage maps’ on May 6th, 2020] as well as in metallic aircraft structures [see ‘Out of the valley of death into a hype cycle’ on February 24th, 2021]. As industrial case studies, we have deployed the technology for validation of predictions of the structural behaviour of a prototype aircraft cockpit [see ‘The blind leading the blind’ on May 27th, 2020] as part of the MOTIVATE project and for damage detection during a wing test as part of the DIMES project. As a result of the experience gained in these case studies, we recently published an enhanced version of our technique for extracting feature vectors that allows us to handle data from irregularly shaped objects or datasets with gaps in them [Christian et al, 2021]. Now, as part of the Smarter Testing project [see ‘Jigsaw puzzling without a picture’ on October 27th, 2021] and in collaboration with Dassault Systèmes, we have developed a web-based widget that implements the enhanced technique for extracting feature vectors and compares datasets from computational models and physical models. The THEON web-based widget is available together with a video demonstration of its use and a user manual. We supplied some exemplar datasets based on our work in structural mechanics as supplementary material associated with our publication; however, the technique is applicable across a wide range of fields including the earth sciences, as we demonstrated in our recent work on El Niño events [see ‘From strain measurements to assessing El Niño events’ on March 17th, 2021]. We feel that we have taken some significant steps along the innovation path that should lead to adoption of our technique by a wider community; but only time will tell whether this technology survives or falls by the wayside despite our efforts to keep it on track.
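For readers who would like a concrete feel for what extracting a feature vector involves, the short Python sketch below represents a field of data by the coefficients of a fitted series of orthogonal polynomials, so that a map containing millions of values is reduced to a vector of tens of values that can be compared directly with another map. The Chebyshev basis, the truncation order and the least-squares fit are illustrative choices for this sketch, not the algorithm implemented in the THEON widget.

import numpy as np
from numpy.polynomial import chebyshev as C

def decompose(field, order=5):
    # Fit a 2-D Chebyshev series to the field and return its coefficients
    # as a feature vector with (order + 1)**2 elements.
    ny, nx = field.shape
    x = np.linspace(-1.0, 1.0, nx)   # map the pixel grid onto [-1, 1]
    y = np.linspace(-1.0, 1.0, ny)
    X, Y = np.meshgrid(x, y)
    # Design matrix: one column per basis function T_i(x) * T_j(y)
    V = C.chebvander2d(X.ravel(), Y.ravel(), [order, order])
    coeffs, *_ = np.linalg.lstsq(V, field.ravel(), rcond=None)
    return coeffs

# Compare a predicted strain map with a measurement in the feature-vector domain
rng = np.random.default_rng(0)
measured = rng.normal(size=(256, 256))    # stand-in for a measured strain map
predicted = measured + 0.01               # stand-in for a model prediction
print(np.linalg.norm(decompose(measured) - decompose(predicted)))

In practice the choice of basis and truncation order matters, and handling irregularly shaped objects or datasets with gaps, as in Christian et al [2021], requires more care than this idealised rectangular field.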

Bibliography

Christian WJR, Dvurecenska K, Amjad K, Pierce J, Przybyla C & Patterson EA, Real-time quantification of damage in structural materials during mechanical testing, Royal Society Open Science, 7:191407, 2020.

Christian WJR, Dean AD, Dvurecenska K, Middleton CA & Patterson EA, Comparing full-field data from structural components with complicated geometries, Royal Society Open Science, 8(9):210916, 2021.

Dvurecenska K, Graham S, Patelli E & Patterson EA, A probabilistic metric for the validation of computational models, Royal Society Open Science, 5:180687, 2018.

Middleton CA, Weihrauch M, Christian WJR, Greene RJ & Patterson EA, Detection and tracking of cracks based on thermoelastic stress analysis, Royal Society Open Science, 7:200823, 2020.

Wang W, Mottershead JE, Patki A & Patterson EA, Construction of shape features for the representation of full-field displacement/strain data, Applied Mechanics and Materials, 24-25:365-370, 2010.

Diving into three-dimensional fluids

My research group has been working for some years on methods that allow straightforward comparison of large datasets [see ‘Recognizing strain’ on October 28th, 2015]. Our original motivation was to compare maps of predicted strain over the surface of engineering structures with maps of measurements. We have used these comparison methods to validate predictions produced by computational models [see ‘Million to one’ on November 21st, 2018] and to identify and track changes in the condition of engineering structures [see ‘Out of the valley of death into a hype cycle’ on February 24th, 2021]. Recently, we have extended this second application to tracking changes in the environment, including the occurrence of El Niño events [see ‘From strain measurements to assessing El Niño events’ on March 17th, 2021]. Now, we are hoping to extend this research into fluid mechanics by using our techniques to compare flow patterns. We have had some success in exploring the use of these methods to optimise the design of the mesh of elements used in computational fluid dynamics to model some simple flow regimes. We are looking for a PhD student to work on extending our model validation techniques into fluid mechanics using volumes of data from measurements and predictions rather than fields, i.e., moving from two-dimensional to three-dimensional datasets, as sketched below. If you are interested, or know someone who might be, then please get in touch.
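To give a flavour of the jump from fields to volumes, the sketch below extends the same orthogonal-polynomial idea to a three-dimensional dataset, such as one velocity component sampled on a regular grid; the Chebyshev basis and the stand-in data are assumptions for illustration rather than an established method for comparing flow patterns.

import numpy as np
from numpy.polynomial import chebyshev as C

def decompose_volume(vol, order=4):
    # Fit a 3-D Chebyshev series to the volume and return its coefficients
    # as a feature vector with (order + 1)**3 elements.
    nz, ny, nx = vol.shape
    x, y, z = (np.linspace(-1.0, 1.0, n) for n in (nx, ny, nz))
    Z, Y, X = np.meshgrid(z, y, x, indexing="ij")
    V = C.chebvander3d(X.ravel(), Y.ravel(), Z.ravel(), [order, order, order])
    coeffs, *_ = np.linalg.lstsq(V, vol.ravel(), rcond=None)
    return coeffs

rng = np.random.default_rng(1)
predicted = rng.normal(size=(32, 32, 32))   # stand-in for a CFD velocity component
measured = predicted + 0.05 * rng.normal(size=predicted.shape)
print(np.linalg.norm(decompose_volume(predicted) - decompose_volume(measured)))

Note that the length of the feature vector grows as the cube of the polynomial order, which hints at why moving from fields to volumes is not a trivial step.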

There is more information on the PhD project here.

Jigsaw puzzling without a picture

A350 XWB passes Maximum Wing Bending test

Research sometimes feels like putting together a jigsaw puzzle without the picture or being sure you have all of the pieces. The pieces we are trying to fit together at the moment are (i) image decomposition of strain fields [see ‘Recognizing strain’ on October 28th, 2015], which allows fields containing millions of data values to be represented by a feature vector with only tens of elements and is useful for comparing maps or fields of predictions from a computational model with measurements made in the real world; (ii) evaluation of the variation in measurement uncertainty over a field of view of measured displacements or strains in a large structure [see ‘Industrial uncertainty’ on December 12th, 2018], which provides information about the quality of the measurements; and (iii) a probabilistic validation metric that provides a measure of how well predictions from a computational model represent measurements made in the real world [see ‘Million to one’ on November 21st, 2018]. We have found some of the missing pieces of the jigsaw: for example, we have established how to represent the distribution of measurement uncertainty in the feature vector domain [see ‘From strain measurements to assessing El Niño events’ on March 17th, 2021] so that it can be used to assess the significance of differences between measurements and predictions represented by their feature vectors, which connects (i) and (ii). Very recently we have demonstrated a generic technique for performing image decomposition of irregularly shaped fields of data or data fields with holes [see Christian et al, 2021], which extends the applicability of our method for comparing measurements and predictions to real-world objects rather than idealised shapes. This allows (i) to be used in industrial applications, but we still have to work out how to connect it to the probabilistic metric in (iii) while also incorporating spatially-varying uncertainty; a simplified sketch of one possible connection is given below. These techniques can be used in a wide range of applications, as demonstrated in our recent work on El Niño events [see Alexiadis et al, 2021], because, by treating all fields of data as images, they are agnostic about the source and format of the data. However, at the moment, our main focus is on their application to ground tests on aircraft structures as part of the Smarter Testing project in collaboration with Airbus, the Centre for Modelling & Simulation, Dassault Systèmes, GOM UK Ltd, and the National Physical Laboratory, with funding from the Aerospace Technology Institute. Together we are working towards digital continuity across virtual and physical testing of aircraft structures to provide live data fusion and enable condition-led inspections, test control and validation of computational models. We anticipate that these advances will reduce the time and cost of physical tests and accelerate the development of new designs of aircraft that will contribute to global sustainability targets (the aerospace industry has committed to reduce CO2 emissions to 50% of 2005 levels by 2050). The Smarter Testing project has an ambitious goal, which reveals that our pieces of the jigsaw puzzle belong to a small section of a much larger one.
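To make the connection between (i), (ii) and (iii) more tangible, here is a deliberately simplified sketch of a comparison in the feature-vector domain: it counts the proportion of feature-vector components whose prediction-measurement difference falls within the measurement uncertainty. The acceptance-counting form, the coverage factor k, and the assumption that the uncertainty u_fv has already been transformed into the feature-vector domain are all illustrative simplifications, not the published metric.

import numpy as np

def validation_metric(fv_pred, fv_meas, u_fv, k=2.0):
    # Proportion of feature-vector components for which the difference
    # between prediction and measurement lies within k times the
    # measurement uncertainty for that component.
    accepted = np.abs(fv_pred - fv_meas) <= k * u_fv
    return accepted.mean()

fv_meas = np.array([4.1, -2.3, 0.8, 0.1])   # measured feature vector (stand-in)
fv_pred = np.array([4.0, -2.5, 0.9, 0.4])   # predicted feature vector (stand-in)
u_fv = np.array([0.1, 0.1, 0.1, 0.1])       # per-component uncertainty (assumed input)
print(validation_metric(fv_pred, fv_meas, u_fv))   # 0.75, i.e. 75% agreement

A full treatment would replace this simple acceptance counting with a properly probabilistic statement, which is exactly the open question of connecting (iii) with the spatially-varying uncertainty in (ii).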

For more on the Smarter Testing project see:

https://www.aerospacetestinginternational.com/news/structural-testing/smarter-testing-research-program-to-link-virtual-and-physical-aerospace-testing.html

https://www.aerospacetestinginternational.com/opinion/how-integrating-the-virtual-and-physical-will-make-aerospace-testing-and-certification-smarter.html

References

Alexiadis A, Ferson S & Patterson EA, Transformation of measurement uncertainties into low-dimensional feature vector space, Royal Society Open Science, 8(3):201086, 2021.

Christian WJR, Dean AD, Dvurecenska K, Middleton CA & Patterson EA, Comparing full-field data from structural components with complicated geometries, Royal Society Open Science, 8(9):210916, 2021.

Image: http://www.airbus.com/galleries/photo-gallery

Deep uncertainty and meta-ignorance

The term ‘unknown unknowns’ was made famous by Donald Rumsfeld almost 20 years ago when, as US Secretary of Defense, he used it to describe the lack of evidence about terrorist groups being supplied with weapons of mass destruction by the Iraqi government. However, the term was probably coined almost 50 years earlier by Joseph Luft and Harrington Ingham when they developed the Johari window as a heuristic tool to help people better understand their relationships. In engineering, and other fields in which predictive models are important tools, it is used to describe situations about which there is deep uncertainty. Deep uncertainty refers to situations where experts do not know or cannot agree about which models to use, how to describe the uncertainties present, or how to interpret the outcomes from predictive models. Rumsfeld talked about known knowns, known unknowns and unknown unknowns; an alternative, simpler but perhaps less catchy, classification is ‘the known, the unknown, and the unknowable’, which Diebold, Doherty and Herring used as part of the title of their book on financial risk management. David Spiegelhalter suggests ‘risk, uncertainty and ignorance’ before providing a more sophisticated classification: aleatory uncertainty, epistemic uncertainty and ontological uncertainty. Aleatory uncertainty is the inevitable unpredictability of the future that can be fully described using probability. Epistemic uncertainty is a lack of knowledge about the structure and parameters of models used to predict the future. Ontological uncertainty is a complete lack of knowledge and understanding about the entire modelling process, i.e. deep uncertainty. When we fail to recognise that ontological uncertainty is present, we have meta-ignorance, which means failing even to consider the possibility of being wrong. For a number of years, part of my research effort has been focussed on predictive models that are unprincipled and untestable; in other words, they are not built on widely-accepted principles or scientific laws and it is not feasible to conduct physical tests to acquire data to demonstrate their validity [see editorial ‘On the credibility of engineering models and meta-models’, JSA 50(4):2015]. Some people would say that untestability implies a model is not scientific, following Popper’s requirement that a scientific theory be refutable. In reality, however, unprincipled and untestable models are encountered in a range of fields, including space engineering, fusion energy and toxicology. We have developed a set of credibility factors, designed as a heuristic tool, that allows the relevance of such models and their predictions to be evaluated systematically [see ‘Credible predictions for regulatory decision-making’ on December 9th, 2020]. One outcome is to allow experts to agree on their disagreements and ignorance, i.e., to define the extent of our ontological uncertainty, which is an important step towards making rational decisions about the future when there is deep uncertainty.

References

Diebold FX, Doherty NA & Herring RJ (eds), The Known, the Unknown, and the Unknowable in Financial Risk Management: Measurement and Theory Advancing Practice, Princeton, NJ: Princeton University Press, 2010.

Spiegelhalter D, Risk and uncertainty communication, Annual Review of Statistics and Its Application, 4:31-60, 2017.

Patterson EA & Whelan MP, On the validation of variable fidelity multi-physics simulations, Journal of Sound and Vibration, 448:247-258, 2019.

Patterson EA, Whelan MP & Worth AP, The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application, Computational Toxicology, 100144, 2020.