
Diving into three-dimensional fluids

My research group has been working for some years on methods that allow straightforward comparison of large datasets [see 'Recognising strain' on October 28th 2015].  Our original motivation was to compare maps of predicted strain over the surface of engineering structures with maps of measurements.  We have used these comparison methods to validate predictions produced by computational models [see 'Million to one' on November 21st 2018] and to identify and track changes in the condition of engineering structures [see 'Out of the valley of death into a hype cycle' on February 24th 2021].  Recently, we have extended this second application to tracking changes in the environment, including the occurrence of El Niño events [see 'From strain measurements to assessing El Niño events' on March 17th 2021].  Now, we are hoping to extend this research into fluid mechanics by using our techniques to compare flow patterns.  We have had some success in using these methods to optimise the design of the mesh of elements used in computational fluid dynamics models of some simple flow regimes.  We are looking for a PhD student to extend our model validation techniques into fluid mechanics using volumes of data from measurements and predictions rather than fields, i.e., moving from two-dimensional to three-dimensional datasets.  If you are interested, or know someone who might be, then please get in touch.

There is more information on the PhD project here.
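For readers curious about what moving from fields to volumes might involve computationally, the sketch below shows one plausible way to decompose a three-dimensional dataset into a short feature vector, by fitting a tensor-product Chebyshev basis to a sampled volume so that measured and predicted flows can be compared through a handful of coefficients rather than millions of values.  It is only an illustrative sketch under assumed choices (basis, polynomial degree, regular grid), not our group's actual implementation.

```python
# Illustrative sketch: representing a 3D scalar field (e.g., flow speed in a
# volume) by a short feature vector of Chebyshev coefficients. The basis,
# polynomial degree and fitting method are assumptions for illustration only.
import numpy as np
from numpy.polynomial import chebyshev as C

def feature_vector_3d(field, deg=(4, 4, 4)):
    """Fit a tensor-product Chebyshev basis to a volume sampled on a regular grid."""
    nx, ny, nz = field.shape
    # Chebyshev polynomials are defined on [-1, 1] in each direction
    x, y, z = (np.linspace(-1, 1, n) for n in (nx, ny, nz))
    X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
    # Pseudo-Vandermonde matrix: one column per product of basis functions
    V = C.chebvander3d(X.ravel(), Y.ravel(), Z.ravel(), deg)
    coeffs, *_ = np.linalg.lstsq(V, field.ravel(), rcond=None)
    return coeffs  # 125 numbers standing in for the whole volume

# Compare a 'measured' and a 'predicted' volume via their feature vectors
rng = np.random.default_rng(0)
measured = rng.normal(size=(40, 40, 40))
predicted = measured + 0.01 * rng.normal(size=(40, 40, 40))
distance = np.linalg.norm(feature_vector_3d(measured) - feature_vector_3d(predicted))
print(f"Distance between feature vectors: {distance:.4f}")
```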

Jigsaw puzzling without a picture

A350 XWB passes Maximum Wing Bending test

Research sometimes feels like putting together a jigsaw puzzle without the picture, or without being sure you have all of the pieces.  The pieces we are trying to fit together at the moment are: (i) image decomposition of strain fields [see 'Recognising strain' on October 28th 2015], which allows fields containing millions of data values to be represented by a feature vector with only tens of elements and is useful for comparing maps or fields of predictions from a computational model with measurements made in the real world; (ii) evaluation of the variation in measurement uncertainty over a field of view of measured displacements or strains in a large structure [see 'Industrial uncertainty' on December 12th 2018], which provides information about the quality of the measurements; and (iii) a probabilistic validation metric that provides a measure of how well predictions from a computational model represent measurements made in the real world [see 'Million to one' on November 21st 2018].  We have found some of the missing pieces of the jigsaw; for example, we have established how to represent the distribution of measurement uncertainty in the feature vector domain [see 'From strain measurements to assessing El Niño events' on March 17th 2021] so that it can be used to assess the significance of differences between measurements and predictions represented by their feature vectors, which connects (i) and (ii).  Very recently, we have demonstrated a generic technique for performing image decomposition of irregularly shaped fields of data, or data fields with holes [see Christian et al, 2021], which extends the applicability of our method for comparing measurements and predictions from idealised shapes to real-world objects.  This allows (i) to be used in industrial applications, but we still have to work out how to connect it to the probabilistic metric in (iii) while also incorporating spatially-varying uncertainty.  These techniques can be used in a wide range of applications, as demonstrated in our recent work on El Niño events [see Alexiadis et al, 2021], because, by treating all fields of data as images, the techniques are agnostic about the source and format of the data.  However, at the moment, our main focus is on their application to ground tests on aircraft structures as part of the Smarter Testing project in collaboration with Airbus, the Centre for Modelling & Simulation, Dassault Systèmes, GOM UK Ltd, and the National Physical Laboratory, with funding from the Aerospace Technology Institute.  Together we are working towards digital continuity across virtual and physical testing of aircraft structures to provide live data fusion and to enable condition-led inspections, test control and validation of computational models.  We anticipate that these advances will reduce the time and cost of physical tests and accelerate the development of new aircraft designs that will contribute to global sustainability targets (the aerospace industry has committed to reducing CO2 emissions to 50% of 2005 levels by 2050).  The Smarter Testing project has an ambitious goal, which reveals that our pieces of the jigsaw puzzle belong to a small section of a much larger one.
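As a rough illustration of how pieces (i) and (ii) can connect, note that image decomposition is linear in the coefficients, so a map of spatially-varying measurement uncertainty can be propagated into the feature vector domain by standard linear error propagation.  The sketch below shows this idea under assumed choices (a Chebyshev basis, a least-squares fit, and uncorrelated per-pixel uncertainty); it is not the published method, for which see Alexiadis et al, 2021.

```python
# Illustrative sketch: propagating a field of measurement uncertainty into the
# feature-vector domain. Because the fit is linear (data ≈ V @ c), the
# coefficient covariance follows from the per-pixel variances. The basis,
# degree and uncorrelated-noise assumption are illustrative only.
import numpy as np
from numpy.polynomial import chebyshev as C

ny, nx = 64, 64
x, y = np.linspace(-1, 1, nx), np.linspace(-1, 1, ny)
X, Y = np.meshgrid(x, y)
strain = np.exp(-(X**2 + Y**2))          # stand-in for a measured strain map
sigma = 0.02 + 0.05 * np.abs(X)          # uncertainty varying across the field

V = C.chebvander2d(X.ravel(), Y.ravel(), deg=[5, 5])   # basis matrix
P = np.linalg.pinv(V)                    # least-squares projector: c = P @ d
c = P @ strain.ravel()                   # feature vector (36 coefficients)
cov_c = (P * sigma.ravel()**2) @ P.T     # covariance = P @ diag(sigma^2) @ P.T
c_uncertainty = np.sqrt(np.diag(cov_c))  # one uncertainty per coefficient
```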

For more on the Smarter Testing project see:

https://www.aerospacetestinginternational.com/news/structural-testing/smarter-testing-research-program-to-link-virtual-and-physical-aerospace-testing.html

https://www.aerospacetestinginternational.com/opinion/how-integrating-the-virtual-and-physical-will-make-aerospace-testing-and-certification-smarter.html

References

Alexiadis A, Ferson S, Patterson EA. Transformation of measurement uncertainties into low-dimensional feature vector space. Royal Society Open Science, 8(3):201086, 2021.

Christian WJ, Dean AD, Dvurecenska K, Middleton CA, Patterson EA. Comparing full-field data from structural components with complicated geometries. Royal Society Open Science, 8(9):210916, 2021.

Image: http://www.airbus.com/galleries/photo-gallery

Deep uncertainty and meta-ignorance

The term 'unknown unknowns' was made famous by Donald Rumsfeld almost 20 years ago when, as US Secretary of Defense, he used it to describe the lack of evidence about terrorist groups being supplied with weapons of mass destruction by the Iraqi government.  However, the term was probably coined almost 50 years earlier by Joseph Luft and Harrington Ingham when they developed the Johari window as a heuristic tool to help people better understand their relationships.  In engineering, and in other fields in which predictive models are important tools, it is used to describe situations about which there is deep uncertainty.  Deep uncertainty refers to situations where experts do not know, or cannot agree about, what models to use, how to describe the uncertainties present, or how to interpret the outcomes from predictive models.  Rumsfeld talked about known knowns, known unknowns, and unknown unknowns; an alternative, simpler but perhaps less catchy, classification is 'the known, the unknown, and the unknowable', which Diebold, Doherty and Herring used as part of the title of their book on financial risk management.  David Spiegelhalter suggests 'risk, uncertainty and ignorance' before providing a more sophisticated classification: aleatory uncertainty, epistemic uncertainty and ontological uncertainty.  Aleatory uncertainty is the inevitable unpredictability of the future that can be fully described using probability.  Epistemic uncertainty is a lack of knowledge about the structure and parameters of the models used to predict the future.  Ontological uncertainty is a complete lack of knowledge and understanding of the entire modelling process, i.e., deep uncertainty.  When we do not recognise that ontological uncertainty is present, we have meta-ignorance, which means failing even to consider the possibility of being wrong.  For a number of years, part of my research effort has been focussed on predictive models that are unprincipled and untestable; in other words, they are not built on widely-accepted principles or scientific laws and it is not feasible to conduct physical tests to acquire data to demonstrate their validity [see editorial 'On the credibility of engineering models and meta-models', JSA 50(4):2015].  Some people would say that untestability implies a model is not scientific, based on Popper's requirement that a scientific theory be refutable.  However, in reality, unprincipled and untestable models are encountered in a range of fields, including space engineering, fusion energy and toxicology [see Patterson & Whelan, 2019; Patterson et al, 2020].  We have developed a set of credibility factors designed as a heuristic tool to allow the relevance of such models and their predictions to be evaluated systematically [see 'Credible predictions for regulatory decision-making' on December 9th, 2020].  One outcome is to allow experts to agree on their disagreements and ignorance, i.e., to define the extent of our ontological uncertainty, which is an important step towards making rational decisions about the future when there is deep uncertainty.

References

Diebold FX, Doherty NA, Herring RJ, eds. The Known, the Unknown, and the Unknowable in Financial Risk Management: Measurement and Theory Advancing Practice. Princeton, NJ: Princeton University Press, 2010.

Spiegelhalter D. Risk and uncertainty communication. Annual Review of Statistics and Its Application, 4:31-60, 2017.

Patterson EA, Whelan MP. On the validation of variable fidelity multi-physics simulations. J. Sound and Vibration. 448:247-58, 2019.

Patterson EA, Whelan MP, Worth AP. The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application. Computational Toxicology. 100144, 2020.

From strain measurements to assessing El Niño events

One of the exciting aspects of leading a university research group is that you can never be quite sure where the research is going next.  We published a nice example of this unpredictability last week in Royal Society Open Science in a paper called 'Transformation of measurement uncertainties into low-dimensional feature vector space' [1].  While the title is an accurate description of the contents, it does not give much away, and it certainly does not reveal that we proposed a new method for assessing the occurrence of El Niño events.  For some time, we have been working with massive datasets of measurements from arrays of sensors and representing them by fitting polynomials in a process known as image decomposition [see 'Recognising strain' on October 28th, 2015].  The relatively small number of coefficients from these polynomials can be collated into a feature vector which facilitates comparison with other datasets [see, for example, 'Out of the valley of death into a hype cycle' on February 24th, 2021].  Our recent paper provides a solution to the issue of representing the measurement uncertainty in the same space as the feature vector, which is roughly what we set out to do.  We demonstrated the new method by calibrating and validating a computational model of a simple beam in bending using data from an earlier study in an EU-funded project called VANESSA [2], so no surprises there.  However, my co-author and PhD student, Antonis Alexiadis, then went looking for other interesting datasets with which to demonstrate the method.  He found a set of spatially-varying uncertainties associated with a metamodel of soil moisture in a river basin in China [3] and global oceanographic temperature fields collected monthly over 11 years from 2002 to 2012 [4].  We used the latter dataset to develop a new technique for assessing the occurrence of El Niño events in the Pacific Ocean.  Our technique is based on global ocean dynamics rather than on the small region of the Pacific Ocean that is usually used, and it has the added advantages of providing a confidence level on the assessment as well as enabling straightforward comparisons of predictions and measurements.  The comparison of predictions and measurements is a recurring theme in our current research, but I did not expect it to lead into ocean dynamics.

Image: Figure 11 from [1], showing convex hulls fitted to the clouds of points representing the uncertainty intervals for the ocean temperature measurements for each month in 2002, using only the three most significant principal components.  The lack of overlap between hulls can be interpreted as implying a significant difference in temperature between months.
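To make the idea in the caption concrete, here is a minimal sketch of how non-overlapping convex hulls in the space of the leading principal components might be used to flag a significant difference between two monthly uncertainty clouds.  The vertex-in-hull test below is a crude proxy for a full hull-intersection test and the data are synthetic; see [1] for the actual procedure.

```python
# Illustrative sketch: flag two monthly uncertainty clouds as significantly
# different when neither cloud has points inside the other's convex hull in
# the space of the three leading principal components. Synthetic data and a
# simplified overlap test, for illustration only.
import numpy as np
from scipy.spatial import Delaunay

def hulls_overlap(points_a, points_b):
    """Return True if either point cloud intrudes into the other's convex hull."""
    in_a = Delaunay(points_a).find_simplex(points_b) >= 0
    in_b = Delaunay(points_b).find_simplex(points_a) >= 0
    return bool(in_a.any() or in_b.any())

rng = np.random.default_rng(1)
month_1 = rng.normal(loc=0.0, scale=0.3, size=(200, 3))  # e.g., January cloud
month_7 = rng.normal(loc=2.0, scale=0.3, size=(200, 3))  # e.g., July cloud
overlap = hulls_overlap(month_1, month_7)
print("Significant difference" if not overlap else "No significant difference")
```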

References:

[1] Alexiadis A, Ferson S, Patterson EA. Transformation of measurement uncertainties into low-dimensional feature vector space. Royal Society Open Science, 8(3):201086, 2021.

[2] Lampeas G, Pasialis V, Lin X, Patterson EA. On the validation of solid mechanics models using optical measurements and data decomposition. Simulation Modelling Practice and Theory, 52:92-107, 2015.

[3] Kang J, Jin R, Li X, Zhang Y. Block Kriging with measurement errors: a case study of the spatial prediction of soil moisture in the middle reaches of Heihe River Basin. IEEE Geoscience and Remote Sensing Letters, 14:87-91, 2017.

[4] Gaillard F, Reynaud T, Thierry V, Kolodziejczyk N, von Schuckmann K. In situ-based reanalysis of the global ocean temperature and salinity with ISAS: variability of the heat content and steric height. J. Climate, 29:1305-1323, 2016.