Category Archives: MyResearch

Our last DIMES

Photograph of wing test in AWIC

Thirty-three months ago (see ‘Finding DIMES’ on February 6th, 2019) we set off at a gallop ‘to develop and demonstrate an automated measurement system that integrates a range of measurement approaches to enable damage and cracks to be detected and monitored as they originate at multi-material interfaces in an aircraft assembly’. The quotation is taken directly from the aim of the DIMES project, which was originally planned and funded as a two-year research programme. Our research, in particular the demonstration element, was slowed by the pandemic and we resorted to two no-cost extensions, initially for three months and then for six months, to achieve the project aim. Two weeks ago, we held our final review meeting, and this week we will present our latest results in the third of a series of annual workshops hosted by Airbus, the project’s topic manager.

The DIMES system combines visual and infrared cameras with resistance strain gauges and fibre Bragg gratings to detect 1 mm cracks in metals and damage indications in composites that are only 6 mm in diameter. We had a concept design by April 2019 (see ‘Joining the dots’ on July 10th, 2019) and a detailed design by August 2019. Airbus supplied us with a section of A320 wing, and we built a test-bench at Empa in Zurich in which we installed our prototype measurement system in the last quarter of 2019 (see ‘When seeing nothing is a success’ on December 11th, 2019). Then the pandemic intervened and we did not finish testing until May 2021, by which time we had also evaluated the system for monitoring damage in composite A350 fuselage panels (see ‘Noisy progressive failure of a composite panel’ on June 30th, 2021).
In parallel, we have installed our ‘DIMES system’ in ground tests on an aircraft wing at Airbus in Filton (see image) and, using a remote installation, in a cockpit at Airbus in Toulouse (see ‘Most valued player performs remote installation’ on December 2nd, 2020), as well as on an aircraft at NRC Aerospace in Ottawa (see ‘An upside to lockdown’ on April 14th, 2021). Our innovative technology allows condition-led monitoring based on automated damage detection and enables ground tests on aircraft structures to be run 24/7, saving about three months on each year-long test.
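The value of condition-led monitoring is that the test does not need an operator watching the instruments. A minimal sketch of the underlying idea is shown below; the channel names, baselines and tolerances are invented for illustration only, and the real DIMES system fuses visual and infrared camera data with the strain-sensor channels in a far more sophisticated way:

```python
# Hypothetical illustration of condition-led monitoring: raise a damage
# indication only when independent sensing channels all drift beyond a
# baseline tolerance, which reduces false alarms during unattended
# 24/7 running. All values below are invented for this sketch.
BASELINE = {"strain_gauge": 1200.0, "fbg": 1195.0, "thermal": 0.0}
TOLERANCE = {"strain_gauge": 50.0, "fbg": 50.0, "thermal": 1.5}

def damage_indicated(readings: dict) -> bool:
    """Return True if every channel deviates from its baseline by more
    than its tolerance, i.e. the channels agree that behaviour changed."""
    return all(
        abs(readings[ch] - BASELINE[ch]) > TOLERANCE[ch]
        for ch in BASELINE
    )

# One channel drifting alone does not trigger an indication...
print(damage_indicated({"strain_gauge": 1300.0, "fbg": 1200.0, "thermal": 0.2}))
# ...but agreement across all channels does.
print(damage_indicated({"strain_gauge": 1300.0, "fbg": 1290.0, "thermal": 2.4}))
```

Requiring agreement across redundant channels is one simple way to trade sensitivity for robustness in an automated system left running overnight.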

The University of Liverpool is the coordinator of the DIMES project and the other partners are Empa, Dantec Dynamics GmbH and Strain Solutions Ltd.

The DIMES project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 820951.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.

Deep uncertainty and meta-ignorance

Decorative image

The term ‘unknown unknowns’ was made famous by Donald Rumsfeld almost 20 years ago when, as US Secretary of Defense, he used it in describing the lack of evidence about terrorist groups being supplied with weapons of mass destruction by the Iraqi government. However, the term was probably coined almost 50 years earlier by Joseph Luft and Harrington Ingham when they developed the Johari window as a heuristic tool to help people better understand their relationships. In engineering, and other fields in which predictive models are important tools, it is used to describe situations about which there is deep uncertainty. Deep uncertainty refers to situations where experts do not know, or cannot agree about, what models to use, how to describe the uncertainties present, or how to interpret the outcomes from predictive models. Rumsfeld talked about known knowns, known unknowns and unknown unknowns; an alternative, simpler but perhaps less catchy, classification is ‘the known, the unknown, and the unknowable’, which Diebold, Doherty and Herring used as part of the title of their book on financial risk management. David Spiegelhalter suggests ‘risk, uncertainty and ignorance’ before providing a more sophisticated classification: aleatory, epistemic and ontological uncertainty. Aleatory uncertainty is the inevitable unpredictability of the future that can be fully described using probability. Epistemic uncertainty is a lack of knowledge about the structure and parameters of models used to predict the future. Ontological uncertainty is a complete lack of knowledge and understanding of the entire modelling process, i.e. deep uncertainty. When it is not recognised that ontological uncertainty is present, we have meta-ignorance, which means failing even to consider the possibility of being wrong.
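The distinction between the first two categories can be made concrete with a toy calculation. In the sketch below, a deliberately simple (and entirely hypothetical) linear model predicts a deflection from a load: the scatter in the load is aleatory uncertainty, sampled from a probability distribution, while our ignorance of the true stiffness parameter is epistemic uncertainty, represented by a set of candidate values:

```python
import random

def predict_deflection(load: float, stiffness: float) -> float:
    """Toy linear model (hypothetical units): deflection = load / stiffness."""
    return load / stiffness

random.seed(42)

# Aleatory uncertainty: irreducible variability in the applied load,
# fully describable by a probability distribution.
loads = [random.gauss(100.0, 5.0) for _ in range(10_000)]

# Epistemic uncertainty: the true stiffness is unknown, so we carry
# several plausible parameter values through the calculation.
for stiffness in (45.0, 50.0, 55.0):
    deflections = [predict_deflection(load, stiffness) for load in loads]
    mean = sum(deflections) / len(deflections)
    print(f"stiffness={stiffness}: mean deflection {mean:.2f}")

# Ontological uncertainty cannot be sampled at all: if the linear model
# itself is wrong (say the structure behaves nonlinearly), no amount of
# sampling over loads or stiffness values will reveal the error.
```

As the final comment notes, the third category is qualitatively different: no Monte Carlo loop over loads or parameters can expose a flaw in the form of the model itself.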
For a number of years, part of my research effort has been focussed on predictive models that are unprincipled and untestable; in other words, they are not built on widely-accepted principles or scientific laws and it is not feasible to conduct physical tests to acquire data to demonstrate their validity [see editorial ‘On the credibility of engineering models and meta-models’, JSA 50(4):2015]. Some people would say that untestability implies a model is not scientific, based on Popper’s principle that a scientific theory must be refutable. In reality, however, unprincipled and untestable models are encountered in a range of fields, including space engineering, fusion energy and toxicology. We have developed a set of credibility factors designed as a heuristic tool to allow the relevance of such models and their predictions to be evaluated systematically [see ‘Credible predictions for regulatory decision-making’ on December 9th, 2020]. One outcome is to allow experts to agree on their disagreements and ignorance, i.e. to define the extent of our ontological uncertainty, which is an important step towards making rational decisions about the future when there is deep uncertainty.

References

Diebold FX, Doherty NA, Herring RJ, eds. The Known, the Unknown, and the Unknowable in Financial Risk Management: Measurement and Theory Advancing Practice. Princeton, NJ: Princeton University Press, 2010.

Spiegelhalter D. Risk and uncertainty communication. Annual Review of Statistics and Its Application, 4, pp.31-60, 2017.

Patterson EA, Whelan MP. On the validation of variable fidelity multi-physics simulations. J. Sound and Vibration. 448:247-58, 2019.

Patterson EA, Whelan MP, Worth AP. The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application. Computational Toxicology. 100144, 2020.

Noisy progressive failure of a composite panel

Photograph showing close-up of progressive failure in a composite material

Composite materials have revolutionised many fields of engineering by providing lightweight, strong components whose internal structure can be tailored to optimise their load-bearing capabilities. Engineering composites consist of high-strength fibres embedded in a lightweight matrix that keeps the fibres in position and provides the shape of the component. While many composite materials have an impressive structural performance, some also exhibit spectacular failure modes, with noises like guitar strings snapping when fibres start to fail and with jagged eruptions of material appearing on the surface, as shown in the image. A year ago, I reported on our work in the DIMES project to test the capabilities of our integrated measurement system to detect and track damage in real-time in a metallic section from an aircraft wing [see ‘Condition monitoring using infrared imaging’ on June 17th, 2020]. Last month, we completed a further round of tests at Empa to demonstrate the system’s capabilities on composite structures, which have been tested almost to destruction. One of the advantages of composite structures is their capability to function and bear load despite quite high levels of damage, which meant we were able to record the progressive rupture of one of our test panels during cyclic fatigue loading. Watch and listen to this short video to see and hear the material being torn apart; ignore the loud creaking and groaning from the test rig, the failure is the quieter sound, like dead leaves being swept up.

The University of Liverpool is the coordinator of the DIMES project and the other partners are Empa, Dantec Dynamics GmbH and Strain Solutions Ltd. Airbus is the topic manager on behalf of the Clean Sky 2 Joint Undertaking.


Digital twins that thrive in the real-world

Decorative image

Windows of the Soul II [3D video art installation: http://www.haigallery.com/sonia-falcone/]

Digital twins are becoming ubiquitous in many areas of engineering [see ‘Can you trust your digital twin?’ on November 23rd, 2016]. At the same time, however, the terminology is becoming blurred as digital shadows and digital models are treated as if they were synonymous with digital twins. A digital model is a digitised replica of a physical entity that lacks any automatic data exchange between the entity and its replica. A digital shadow is a digital representation of a physical object with a one-way flow of information from the object to its representation. A digital twin, by contrast, is a functional representation with a live feedback loop to its counterpart in the real world. The feedback loop is based on continuous updates to the digital twin about the condition and performance of the physical entity, using data from sensors, and on analysis from the digital twin about the performance of the physical entity. This enables a digital twin to provide a service to many stakeholders. For example, the users of a digital twin of an aircraft engine could include the manufacturer, the operator, the maintenance providers and the insurers. These capabilities imply that digital twins are themselves becoming products, which exist in a digital context that might connect many digital products, thus forming an integrated digital environment. I wrote about integrated digital environments when they were a concept and the primary challenges were technical in nature [see ‘Enabling or disruptive technology for nuclear engineering?’ on January 28th, 2015]. Many of these technical challenges have been resolved, and the next set of challenges are economic and commercial, associated with launching digital twins into global markets that lack adequate understanding, legislation, security, regulation or governance for digital products.
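The three definitions differ only in the direction of data flow, which can be made explicit in a few lines of code. This is a minimal sketch with hypothetical class and method names (a temperature reading stands in for any sensed quantity); it is not drawn from any particular digital-twin framework:

```python
class PhysicalAsset:
    """Stand-in for a physical entity instrumented with sensors (hypothetical)."""
    def __init__(self, temperature: float = 20.0):
        self.temperature = temperature
    def read_sensor(self) -> float:
        return self.temperature
    def apply_setting(self, correction: float) -> None:
        # e.g. a controller adjusting cooling on the twin's advice
        self.temperature -= correction

class DigitalModel:
    """Digital model: a replica with no automatic data exchange."""
    def __init__(self, temperature: float):
        self.temperature = temperature  # set once, updated only by hand

class DigitalShadow(DigitalModel):
    """Digital shadow: one-way flow, physical object -> representation."""
    def sync_from(self, asset: PhysicalAsset) -> None:
        self.temperature = asset.read_sensor()

class DigitalTwin(DigitalShadow):
    """Digital twin: closes the loop, feeding analysis back to the asset."""
    def analyse_and_feedback(self, asset: PhysicalAsset,
                             target: float = 20.0) -> float:
        correction = self.temperature - target
        asset.apply_setting(correction)  # two-way exchange
        return correction
```

The inheritance chain mirrors the progression in the text: each class adds one direction of data flow, and only the twin both reads from and acts upon its physical counterpart.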
In collaboration with my colleagues at the Virtual Engineering Centre, we have recently published a white paper, entitled ‘Transforming digital twins into digital products that thrive in the real world’, which reviews these issues and identifies the need to establish digital contexts that embrace the social, economic and technical requirements for the appropriate use of digital twins [see ‘Digital twins could put at risk what it means to be human’ on November 18th, 2020].