
Diving into three-dimensional fluids

My research group has been working for some years on methods that allow straightforward comparison of large datasets [see ‘Recognizing strain’ on October 28th, 2015]. Our original motivation was to compare maps of predicted strain over the surface of engineering structures with maps of measurements. We have used these comparison methods to validate predictions produced by computational models [see ‘Million to one’ on November 21st, 2018] and to identify and track changes in the condition of engineering structures [see ‘Out of the valley of death into a hype cycle’ on February 24th, 2021]. Recently, we have extended this second application to tracking changes in the environment, including the occurrence of El Niño events [see ‘From strain measurements to assessing El Niño events’ on March 17th, 2021]. Now, we are hoping to extend this research into fluid mechanics by using our techniques to compare flow patterns. We have had some success in exploring the use of these methods to optimise the design of the element mesh used in computational fluid dynamics models of some simple flow regimes. We are looking for a PhD student to work on extending our model validation techniques into fluid mechanics using volumes of data from measurements and predictions rather than fields, i.e., moving from two-dimensional to three-dimensional datasets. If you are interested, or know someone who might be, then please get in touch.
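For readers who like to see the idea in code, here is a minimal sketch of what the three-dimensional extension might look like: a volume of data, whether measured or predicted, is reduced to a short feature vector of polynomial coefficients so that two volumes can be compared coefficient by coefficient. The choice of Chebyshev polynomials, the degrees and the least-squares fit are illustrative assumptions rather than a description of the project's eventual method.

# A minimal sketch (not the group's actual code) of representing a 3D dataset
# by a small feature vector of Chebyshev coefficients, so that a measured and
# a predicted volume can be compared coefficient by coefficient.
import numpy as np
from numpy.polynomial import chebyshev as C

def feature_vector_3d(values, coords, degree=(4, 4, 4)):
    """Least-squares fit of a 3D Chebyshev basis to scattered volume data.

    values : (N,) data at each point (e.g. a velocity component)
    coords : (N, 3) point coordinates, assumed already scaled to [-1, 1]
    degree : polynomial degree in each direction (illustrative choice)
    """
    x, y, z = coords.T
    V = C.chebvander3d(x, y, z, degree)          # design matrix, one column per basis term
    coeffs, *_ = np.linalg.lstsq(V, values, rcond=None)
    return coeffs                                # the compact feature vector

# Example: compare a 'measured' and 'predicted' volume of synthetic data.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(5000, 3))
measured = np.sin(np.pi * pts[:, 0]) * pts[:, 1] + 0.01 * rng.standard_normal(5000)
predicted = np.sin(np.pi * pts[:, 0]) * pts[:, 1]

fv_m = feature_vector_3d(measured, pts)
fv_p = feature_vector_3d(predicted, pts)
print("difference between feature vectors:", np.linalg.norm(fv_m - fv_p))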

There is more information on the PhD project here.

From strain measurements to assessing El Niño events

One of the exciting aspects of leading a university research group is that you can never be quite sure where the research is going next. We published a nice example of this unpredictability last week in Royal Society Open Science in a paper called ‘Transformation of measurement uncertainties into low-dimensional feature vector space‘ [1]. While the title is an accurate description of the contents, it does not give much away and certainly does not reveal that we proposed a new method for assessing the occurrence of El Niño events. For some time we have been working with massive datasets of measurements from arrays of sensors and representing them by fitting polynomials in a process known as image decomposition [see ‘Recognising strain‘ on October 28th, 2015]. The relatively small number of coefficients from these polynomials can be collated into a feature vector which facilitates comparison with other datasets [see for example, ‘Out of the valley of death into a hype cycle‘ on February 24th, 2021]. Our recent paper provides a solution to the issue of representing the measurement uncertainty in the same space as the feature vector, which is roughly what we set out to do. We demonstrated our new method for representing the measurement uncertainty by calibrating and validating a computational model of a simple beam in bending using data from an earlier study in an EU-funded project called VANESSA [2] — so no surprises there. However, my co-author and PhD student, Antonis Alexiadis, then went looking for other interesting datasets with which to demonstrate the new method. He found a set of spatially-varying uncertainties associated with a metamodel of soil moisture in a river basin in China [3] and global oceanographic temperature fields collected monthly over 11 years from 2002 to 2012 [4]. We used the latter set of data to develop a new technique for assessing the occurrence of El Niño events in the Pacific Ocean. Our technique is based on global ocean dynamics rather than on the small region of the Pacific Ocean that is usually used, and has the added advantages of providing a confidence level on the assessment as well as enabling straightforward comparisons of predictions and measurements. The comparison of predictions and measurements is a recurring theme in our current research but I did not expect it to lead into ocean dynamics.
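To give a flavour of what transforming uncertainties into feature space means in practice, here is a hedged sketch rather than the algorithm from [1]: because a least-squares polynomial decomposition is a linear operation, per-pixel uncertainty bounds on a measured field can be carried into bounds on the handful of coefficients that make up the feature vector. The Chebyshev basis, the grid scaling and the worst-case interval propagation are all illustrative assumptions.

# A hedged sketch of the idea behind [1]: because a least-squares polynomial
# decomposition is a linear map, per-pixel measurement uncertainties can be
# carried into the (much smaller) space of coefficients. Illustrative only;
# this is not the algorithm published in the paper.
import numpy as np
from numpy.polynomial import chebyshev as C

def decompose_with_uncertainty(field, u_field, degree=(6, 6)):
    """field, u_field : 2D arrays of values and their uncertainty half-widths."""
    ny, nx = field.shape
    x, y = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(-1, 1, ny))
    V = C.chebvander2d(x.ravel(), y.ravel(), degree)   # design matrix
    M = np.linalg.pinv(V)                              # linear map: pixels -> coefficients
    coeffs = M @ field.ravel()                         # the feature vector
    # Worst-case interval propagation through a linear map uses |M|.
    coeff_half_widths = np.abs(M) @ u_field.ravel()
    return coeffs, coeff_half_widths

# e.g. for a temperature map with a uniform 0.05 K uncertainty (placeholder names):
# c, dc = decompose_with_uncertainty(temperature_map, 0.05 * np.ones_like(temperature_map))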

The image is Figure 11 from [1] showing convex hulls fitted to the cloud of points representing the uncertainty intervals for the ocean temperature measurements for each month in 2002, using only the three most significant principal components. The lack of overlap between hulls can be interpreted as implying a significant difference in the temperature between months.
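The overlap test suggested by the caption can be illustrated with a short sketch: each month's uncertainty cloud in the reduced space is wrapped in a convex hull, and two months are treated as significantly different only if the hulls do not intersect. The linear-programming feasibility check below is one standard way to test for intersection; it mirrors the interpretation in [1] but is not the authors' code.

# A sketch of the kind of test implied by the caption: two clouds of points in
# the reduced (principal component) space are judged 'significantly different'
# if their convex hulls do not overlap. Intersection of the hulls is checked
# with a linear-programming feasibility test.
import numpy as np
from scipy.optimize import linprog

def hulls_overlap(P, Q):
    """P, Q : (m, d) and (n, d) point sets. True if conv(P) and conv(Q) intersect."""
    m, d = P.shape
    n = Q.shape[0]
    # Seek weights lam >= 0, mu >= 0 with sum(lam) = sum(mu) = 1 and P.T @ lam == Q.T @ mu.
    A_eq = np.block([
        [P.T, -Q.T],
        [np.ones((1, m)), np.zeros((1, n))],
        [np.zeros((1, m)), np.ones((1, n))],
    ])
    b_eq = np.concatenate([np.zeros(d), [1.0, 1.0]])
    res = linprog(c=np.zeros(m + n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.status == 0   # feasible => the hulls share at least one point

# months_a and months_b would be the 3-component points for two months
# (hypothetical variable names); non-overlapping hulls suggest a real change:
# print(hulls_overlap(months_a, months_b))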

References:

[1] Alexiadis A, Ferson S, Patterson EA. 2021. Transformation of measurement uncertainties into low-dimensional feature vector space. Royal Society Open Science, 8(3): 201086.

[2] Lampeas G, Pasialis V, Lin X, Patterson EA. 2015.  On the validation of solid mechanics models using optical measurements and data decomposition. Simulation Modelling Practice and Theory 52, 92-107.

[3] Kang J, Jin R, Li X, Zhang Y. 2017. Block Kriging with measurement errors: a case study of the spatial prediction of soil moisture in the middle reaches of Heihe River Basin. IEEE Geoscience and Remote Sensing Letters, 14, 87-91.

[4] Gaillard F, Reynaud T, Thierry V, Kolodziejczyk N, von Schuckmann K. 2016. In situ-based reanalysis of the global ocean temperature and salinity with ISAS: variability of the heat content and steric height. J. Climate. 29, 1305-1323.

Out of the valley of death into a hype cycle?

The capability to identify damage and track its propagation in structures is important in ensuring the safe operation of a wide variety of engineering infrastructure, including aircraft structures. A few years ago, I wrote about research my group was performing, in the INSTRUCTIVE project [see ‘INSTRUCTIVE final reckoning‘ on January 9th, 2019] with Airbus and Strain Solutions Limited, to deliver a new tool for monitoring the development of damage using thermoelastic stress analysis (TSA) [see ‘Counting photons to measure stress‘ on November 18th, 2015]. We collected images using a TSA system while a structural component was subjected to cycles of load that caused damage to initiate and propagate during a fatigue test. The series of images was analysed using a technique based on optical flow to identify apparent movement between the images, which was taken as an indication of the development of damage [1]. We demonstrated that our technique could indicate the presence of a crack less than a millimetre in length and even identify cracks initiating under the heads of bolts in experiments performed in our laboratory [see ‘INSTRUCTIVE update‘ on October 4th, 2017]. However, this technique was susceptible to errors in the images when we tried to use low-cost sensors and to changes in the images caused by flight-cycle loading with varying amplitude and frequency of loads. Essentially, the optical flow approach could be fooled into indicating damage propagation when a sensor delivered a noisy image or the shape of the load cycle was changed. We have now overcome this shortcoming by replacing the optical flow approach with the orthogonal decomposition technique [see ‘Recognising strain‘ on October 28th, 2015] that we developed for comparing data fields from measurements and predictions in validation processes [see ‘Million to one‘ on November 21st, 2018]. Each image is decomposed to a feature vector and differences between the feature vectors are indicative of damage development (see schematic in the thumbnail from [2]). The new technique, which we have named the differential feature vector method, is sufficiently robust that we have been able to use a sensor costing 1% of the price of a typical TSA system to identify and track cracks during cyclic loading. The underpinning research was published in December 2020 by the Royal Society [2] and the technique is being implemented in full-scale ground tests on aircraft structures as part of the DIMES project. Once again, a piece of technology is emerging from the valley of death [see ‘Slowly crossing the valley of death‘ on January 27th, 2021] and, without wishing to initiate the hype cycle [see ‘Hype cycle‘ on September 23rd, 2015], I hope it will transform the use of thermal imaging for condition monitoring.
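For the curious, the essence of the differential feature vector method can be sketched in a few lines: each thermal image is reduced to a feature vector by orthogonal decomposition, and the change in that vector relative to a baseline image is tracked as the damage indicator. The Chebyshev basis, the choice of baseline and the threshold below are illustrative assumptions rather than the settings used in [2].

# A minimal sketch of the idea behind the differential feature vector method:
# each image is reduced to a feature vector and the change in that vector over
# the load cycles is used as the damage indicator. Basis and threshold are
# illustrative assumptions, not the values used in [2].
import numpy as np
from numpy.polynomial import chebyshev as C

def feature_vector(image, degree=(5, 5)):
    """Decompose a 2D image into Chebyshev coefficients by least squares."""
    ny, nx = image.shape
    x, y = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(-1, 1, ny))
    V = C.chebvander2d(x.ravel(), y.ravel(), degree)
    coeffs, *_ = np.linalg.lstsq(V, image.ravel(), rcond=None)
    return coeffs

def damage_indicator(image_sequence):
    """Norm of the change in feature vector relative to the first (baseline) image."""
    baseline = feature_vector(image_sequence[0])
    return np.array([np.linalg.norm(feature_vector(img) - baseline)
                     for img in image_sequence[1:]])

# Flag possible crack growth when the indicator departs from the baseline scatter:
# alarms = damage_indicator(tsa_images) > threshold   # 'tsa_images', 'threshold' are placeholders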

The INSTRUCTIVE and DIMES projects have received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreements No. 685777 and No. 820951, respectively.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.

References

[1] Middleton CA, Gaio A, Greene RJ & Patterson EA, Towards automated tracking of initiation and propagation of cracks in Aluminium alloy coupons using thermoelastic stress analysis, J. Non-destructive Testing, 38:18, 2019.

[2] Middleton CA, Weihrauch M, Christian WJR, Greene RJ & Patterson EA, Detection and tracking of cracks based on thermoelastic stress analysis, R. Soc. Open Sci. 7:200823, 2020.

Reduction in usefulness of reductionism

A couple of months ago I wrote about a set of credibility factors for computational models [see ‘Credible predictions for regulatory decision-making‘ on December 9th, 2020] that we designed to inform interactions between researchers, model builders and decision-makers and so establish trust in the predictions from computational models [1]. This is important because computational modelling is becoming ubiquitous in the development of everything from automobiles and power stations to drugs and vaccines, which inevitably leads to its use in supporting regulatory applications. However, there is another motivation underpinning our work, which is that the systems being modelled are becoming increasingly complex, with the likelihood that they will exhibit emergent behaviour [see ‘Emergent properties‘ on September 16th, 2015], and this makes it increasingly unlikely that a reductionist approach to establishing model credibility will be successful [2]. The reductionist approach to science, which was pioneered by Descartes and Newton, has served science well for hundreds of years and is based on the concept that everything about a complex system can be understood by reducing it to its smallest constituent parts. It is the method of analysis that underpins almost everything you learn as an undergraduate engineer or physicist. However, reductionism loses its power when a system is more than the sum of its parts, i.e., when it exhibits emergent behaviour. Our approach to establishing model credibility is more holistic than traditional methods. This seems appropriate when modelling complex systems for which a complete knowledge of the relationships and patterns of behaviour may not be attainable, e.g., when unexpected or unexplainable emergent behaviour occurs [3]. The hegemony of reductionism in science made us nervous about writing about its shortcomings four years ago when we first published our ideas about model credibility [2]. So, I was pleased to see a paper published last year [4] that identified five fundamental properties of biology that weaken the power of reductionism, namely (1) biological variation is widespread and persistent, (2) biological systems are relentlessly nonlinear, (3) biological systems contain redundancy, (4) biology consists of multiple systems interacting across different time and spatial scales, and (5) biological properties are emergent. Many engineered systems possess all five of these fundamental properties – you just need to look at them from the appropriate perspective, for example, through a microscope to see the variation in the microstructure of a mass-produced part. Hence, in the future, there will need to be an increasing emphasis on holistic approaches and systems thinking in both the education and practices of engineers as well as biologists.

For more on emergence in computational modelling see Manuel Delanda, Philosophy and Simulation: The Emergence of Synthetic Reason, Continuum, London, 2011; and for more on systems thinking see Fritjof Capra and Luigi Luisi, The Systems View of Life: A Unifying Vision, Cambridge University Press, 2014.

References:

[1] Patterson EA, Whelan MP & Worth A, The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application, Computational Toxicology, 17: 100144, 2021.

[2] Patterson EA & Whelan MP, A framework to establish credibility of computational models in biology. Progress in Biophysics and Molecular Biology, 129: 13-19, 2017.

[3] Patterson EA & Whelan MP, On the validation of variable fidelity multi-physics simulations, J. Sound & Vibration, 448:247-258, 2019.

[4] Pruett WA, Clemmer JS & Hester RL, Physiological Modeling and Simulation—Validation, Credibility, and Application. Annual Review of Biomedical Engineering, 22:185-206, 2020.