Tag Archives: computational modelling

Finding DIMES

A couple of weeks ago I wrote about the ‘INSTRUCTIVE final reckoning’ (see post on January 9th).  INSTRUCTIVE was an EU project, which ended on December 31st, 2018, in which we demonstrated that infra-red cameras could be used to monitor the initiation and propagation of cracks in aircraft structures (see Middleton et al, 2019).  Now, we have seamlessly moved on to a new EU project, called DIMES (Development of Integrated MEasurement Systems), which started on January 1st, 2019.  To quote our EU documentation, the overall aim of DIMES is ‘to develop and demonstrate an automated measurement system that integrates a range of measurement approaches to enable damage and cracks to be detected and monitored as they originate at multi-material interfaces in an aircraft assembly’.  In simpler terms, we are going to take the results from the INSTRUCTIVE project, integrate them with other existing technologies for monitoring the structural health of an aircraft, and produce a system that can be installed in an aircraft fuselage and will provide early warning of the formation of cracks.  We have two years to achieve this target and demonstrate the system in a ground-based test on a real fuselage at an Airbus facility.  This was a scary prospect until we had our kick-off meeting and a follow-up brainstorming session a couple of weeks ago.  Now, it’s a little less scary.  If I have scared you with the prospect of cracks in aircraft, then do not be alarmed; we have been flying aircraft with cracks in them for years.  It is impossible to build an aircraft without cracks appearing, possibly during manufacturing and certainly in service; perfection (i.e. cracklessness) is unattainable, so instead the stresses are kept low enough to ensure that undetected cracks will not grow (see ‘Alan Arnold Griffith’ on April 26th, 2017) and that detected ones are repaired before they propagate significantly (see ‘Aircraft inspection’ on October 10th, 2018).

I should explain that the ‘we’ above is the University of Liverpool and Strain Solutions Limited, who were the partners in INSTRUCTIVE, plus EMPA, the Swiss National Materials Laboratory, and Dantec Dynamics GmbH, a producer of scientific instruments in Ulm, Germany.  I am already working with these latter two organisations in the EU project MOTIVATE; so, we are a close-knit team who know and trust each other – and that is one of the keys to successful collaborations tackling ambitious challenges with game-changing outcomes.

So how might the outcomes of DIMES be game-changing?  Well, at the moment, aircraft are designed using computer models that are comprehensively validated using measurement data from a large number of expensive experiments.  The MOTIVATE project is about reducing the number of experiments and increasing the quality and quantity of information gained from each experiment, i.e. ‘Getting Smarter’ (see post on June 21st, 2017).  However, if the measurement system developed in DIMES allowed us to monitor in-flight strain fields at critical locations on board an aircraft, then we would have high-quality data to support future design work, which would allow further reductions in the campaign of experiments required to support new designs.  We would also have continuous, comprehensive monitoring of the structural integrity of every aircraft in the fleet, which would allow more efficient planning of maintenance as well as increased safety margins, or reductions in structural weight while maintaining safety margins.  This would be a significant step towards digital twins of aircraft (see ‘Fourth industrial revolution’ on July 4th, 2018 and ‘Can you trust your digital twin?’ on November 23rd, 2016).

The INSTRUCTIVE, MOTIVATE and DIMES projects have received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreements No. 685777, No. 754660 and No. 820951 respectively.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.

Sources:

Middleton CA, Gaio A, Greene RJ & Patterson EA, Towards automated tracking of initiation and propagation of cracks in aluminium alloy coupons using thermoelastic stress analysis, J. Non-destructive Testing, 38:18, 2019.


Hierarchical modelling in engineering and biology

In 1979, Glenn Harris proposed an analytical hierarchy of models for estimating tactical force effectiveness for the US Army.  It was represented as a pyramid with four layers: a theatre/campaign simulation at the apex, supported by mission-level simulations, below which were engagement models, with engineering models of assets/equipment at the base.  The idea was adopted by the aerospace industry [see the graphic on the left], which places the complete aircraft at the apex, supported by systems, sub-systems and components beneath in increasing numbers, with the pyramid divided vertically in half to represent physical tests on one side and simulations on the other.  This division represents the need to validate predictions from computational models with measurements in the real world [see post on ‘Model validation‘ on September 18th, 2012].  These diagrams are schematic representations used by engineers to plan and organise the extensive programmes of modelling and physical testing undertaken during the design of new aircraft [see post on ‘Models as fables‘ on March 16th, 2016].  The objective of the MOTIVATE research project is to reduce the quantity and increase the quality of the physical tests so that the pyramid becomes lop-sided, i.e. the triangle representing the experiments and tests is a much thinner slice than the one representing the modelling and simulations [see post on ‘Brave New World‘ on January 10th, 2018].

At the same time, I am working with colleagues in toxicology on approaches to establishing credibility in predictive models for chemical risk assessment.  I have constructed an equivalent pyramid to represent the system hierarchy, which is shown on the right in the graphic.  The challenge is the lack of measurement data in the top left of the pyramid, for both moral and legal reasons, which means that there is very limited real-world data available to confirm the predictions from the computational models represented on the right of the pyramid.  In other words, my colleagues in toxicology, and computational biology in general, are where my collaborators in the aerospace industry would like to be; that is, the aerospace engineers want to reach, by design, the position in which the computational biologists already find themselves.  The challenge is that in both cases a paradigm shift is required from objectivism toward relativism, since, in the absence of comprehensive real-world measurement data, validation or confirmation of predictions becomes a social process involving judgement about where the predictions lie on a continuum of usefulness.

Sources:

Harris GL, Computer models, laboratory simulators, and test ranges: meeting the challenge of estimating tactical force effectiveness in the 1980’s, US Army Command and General Staff College, May 1979.

Trevisani DA & Sisti AF, Air Force hierarchy of models: a look inside the great pyramid, Proc. SPIE 4026, Enabling Technology for Simulation Science IV, 23 June 2000.

Patterson EA & Whelan MP, A framework to establish credibility of computational models in biology, Progress in Biophysics and Molecular Biology, 129:13-19, 2017.

Models as fables

In his book, ‘Economic Rules – Why economics works, when it fails and how to tell the difference‘, Dani Rodrik describes models as fables: short stories that revolve around a few principal characters who live in an unnamed generic place and whose behaviour and interaction produce an outcome that serves as a lesson of sorts.  This seems to me to be a healthy perspective compared to the almost slavish belief in computational models that is common today in many quarters.  However, in engineering, and increasingly in precision medicine, we use computational models as reliable and detailed predictors of the performance of specific systems.  Quantifying this reliability in a way that is useful to non-expert decision-makers is a current area of my research.  This work originated in aerospace engineering, where it is possible, though expensive, to acquire comprehensive and information-rich data from experiments and then to validate models by comparing their predictions to measurements.  We have progressed to nuclear power engineering, in which the extreme conditions and time-scales lead to sparse or incomplete data that make it more challenging to assess the reliability of computational models.  Now, we are just starting to consider models in computational biology, where the inherent variability of biological data and our inability to control the real world present even bigger challenges to establishing model reliability.

Sources:

Rodrik, D., Economic Rules: Why economics works, when it fails and how to tell the difference, Oxford University Press, 2015.

Patterson, E.A., Taylor, R.J. & Bankhead, M., A framework for an integrated nuclear digital environment, Progress in Nuclear Energy, 87:97-103, 2016.

Hack, E., Lampeas, G. & Patterson, E.A., An evaluation of a protocol for the validation of computational solid mechanics models, J. Strain Analysis, 51(1):5-13, 2016.

Patterson, E.A., Challenges in experimental strain analysis: interfaces and temperature extremes, J. Strain Analysis, 50(5):282-3, 2015.

Patterson, E.A., On the credibility of engineering models and meta-models, J. Strain Analysis, 50(4):218-220, 2015.