I spent most of last week at the European Union’s Joint Research Centre in Ispra, Italy. I have been collaborating with the scientists in the European Union Reference Laboratory for alternatives to animal testing [EURL ECVAM]. We have been working together on tracking nanoparticles and, more recently, on the validity and credibility of models. Last week I was there to participate in a workshop on Validation and Acceptance of Artificial Intelligence Models in Health. I presented our work on the credibility matrix and on a set of factors that we have developed for establishing trust in a model and its predictions. I left the JRC on Friday evening and slipped back into the UK just before it left the European Union. The departure of the UK from Europe reminds me of a novel by José Saramago called ‘The Stone Raft‘ in which the Iberian peninsula breaks off from the European mainland and drifts around the Atlantic Ocean. The bureaucrats in Europe have to run around dealing with the ensuing disruption while five people in Spain and Portugal are drawn together by surreal events on the stone raft adrift in the ocean.
I need to confess to writing a misleading post some months ago entitled ‘In Einstein’s footprints?‘ on February 27th 2019, in which I promoted our 4th workshop on the ‘Validation of Computational Mechanics Models‘ that we held last month at the Guild Hall of Carpenters [Zunfthaus zur Zimmerleuten] in Zurich. I implied that speakers at the workshop would be stepping in Einstein’s footprints when they presented their research, because Einstein presented a paper at the same venue in 1910. However, as our host in Zurich revealed in his introductory remarks, the Guild Hall was gutted by fire in 2007 and so we were meeting in a fake, or replica, which was so good that most of us had not realised. This was quite appropriate because a theme of the workshop was enhancing the credibility of computer models that are used to replicate the real world. We discussed the issues surrounding the trustworthiness of models in a wide range of fields including aerospace engineering, biomechanics, nuclear power and toxicology. Many of the presentations are available on the website of the EU project MOTIVATE, which organised and sponsored the workshop as part of its dissemination programme. While we did not solve any problems, we did broaden people’s understanding of the issues associated with the trustworthiness of predictions and identified the need to develop common approaches to support regulatory decisions across a range of industrial sectors – that’s probably the theme for our 5th workshop!
The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660 and the Swiss State Secretariat for Education, Research and Innovation under contract number 17.00064.
The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.
A month or so ago I gave a lecture entitled ‘Establishing FACTS (Fidelity And Credibility in Tests & Simulations)’ to the local branch of the Institution of Engineering and Technology (IET). Of course, my title was a play on words because the Oxford English Dictionary defines a ‘fact’ as ‘a thing that is known or proved to be true’ or ‘information used as evidence or as part of a report’. One of my current research interests is how we establish predictions from simulations as evidence that can be used reliably in decision-making. This is important because simulations based on computational models have become ubiquitous in engineering for, amongst other things, design optimisation and evaluation of structural integrity. These models need to possess the appropriate level of fidelity and to be credible in the eyes of decision-makers, not just their creators. Model credibility is usually provided through validation processes using a small number of physical tests that must yield a large quantity of reliable and relevant data [see ‘Getting smarter‘ on June 21st, 2017]. Reliable and relevant data means making measurements with low levels of uncertainty under real-world conditions, which is usually challenging.
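To make the idea of validation against measurement uncertainty a little more concrete, here is a hypothetical sketch (not taken from the lecture, and deliberately simplified): it asks whether each model prediction falls inside the uncertainty band of the corresponding measurement, which is one crude way a decision-maker might begin to weigh predictions as evidence.

```python
# Hypothetical illustration only: check whether model predictions lie
# within the uncertainty bounds of the corresponding test measurements.
# Function name and numbers are invented for this sketch.

def within_uncertainty(predicted, measured, uncertainty):
    """Return True if every prediction lies inside measured +/- uncertainty."""
    return all(
        abs(p - m) <= u
        for p, m, u in zip(predicted, measured, uncertainty)
    )

# Normalised strain predictions vs. measurements (illustrative numbers)
predicted = [1.02, 0.98, 1.10]
measured = [1.00, 1.00, 1.00]
uncertainty = [0.05, 0.05, 0.05]

print(within_uncertainty(predicted, measured, uncertainty))  # False: 1.10 falls outside
```

In practice, of course, validation metrics are richer than a simple pass/fail comparison, but the example shows why low measurement uncertainty matters: the tighter the band, the more discriminating the test of the model.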
These topics recur through much of my research and have found applications in aerospace engineering, nuclear engineering and biology. My lecture to the IET gave an overview of these ideas using applications from each of these fields, some of which I have described in past posts. So, I have now created a new page on this blog with a catalogue of these past posts on the theme of ‘FACTS‘. Feel free to have a browse!
There is about a 3% probability that you have a twin: about 32 in 1000 people are one of a pair of twins. At the moment an even smaller number of us have a digital twin, but this is the direction in which computational biomedicine is moving, along with other fields. For instance, soon all aircraft, and most new nuclear power plants, will have digital twins. Digital twins are computational representations of individual members of a population, or fleet, in the case of aircraft and power plants. For an engineering system, its computer-aided design (CAD) is the beginning of its twin, to which information is added from the quality assurance inspections before it leaves the factory and from non-destructive inspections during routine maintenance, as well as data acquired during service operations from health monitoring. The result is an integrated model and database, which describes the condition and history of the system from conception to the present, that can be used to predict its response to anticipated changes in its environment, its remaining useful life or the impact of proposed modifications to its form and function. It is more challenging to create digital twins of ourselves because we don’t have original design drawings or direct access to the onboard health monitoring system, but this is being worked on. However, digital twins are only useful if people believe in the behaviour or performance that they predict and are prepared to make decisions based on the predictions, in other words if the digital twins possess credibility. Credibility appears to be like beauty because it is in the eye of the beholder. Most modellers believe that their models are both beautiful and credible – after all, they are their ‘babies’ – but unfortunately modellers are not usually the decision-makers, who often have a different frame of reference and set of values.
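The idea of a twin as an integrated model and database that grows from the design through the life of the system can be sketched as a simple data structure. This is a hypothetical illustration only – the class and field names are assumptions for the sketch, not part of any real digital-twin framework:

```python
# Hypothetical sketch: a digital twin as a record that starts from the
# CAD design and accumulates data from QA, non-destructive evaluation
# and in-service health monitoring. Names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    design_id: str                               # the CAD design it begins from
    history: list = field(default_factory=list)  # accumulated observations

    def record(self, source: str, data: dict) -> None:
        """Append an observation from QA, NDE or health monitoring."""
        self.history.append((source, data))

    def condition(self):
        """Most recent recorded state, from conception to the present."""
        return self.history[-1] if self.history else ("design", self.design_id)

twin = DigitalTwin("airframe-CAD-rev3")
twin.record("factory-QA", {"weld_defects": 0})
twin.record("health-monitoring", {"peak_strain": 0.0021})
print(twin.condition())  # ('health-monitoring', {'peak_strain': 0.0021})
```

A real twin would, of course, couple such a database to physics-based models so that the accumulated history conditions the predictions of remaining useful life or response to modifications; the sketch only captures the bookkeeping side of the idea.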
In my group, one current line of research is to provide metrics and language that will assist in conveying confidence in the reliability of a digital twin to non-expert decision-makers, and another is to create methodologies for evaluating the evidence prior to making a decision. The approach differs depending on the extent to which the underlying models are principled, i.e. based on the laws of science, and can be tested using observations from the real world. In practice, even with principled, testable models, a digital twin will never be an identical twin and hence there will always be some uncertainty, so decisions remain a matter of judgement based on a sound understanding of the best available evidence – so you are always likely to need advice from a friendly engineer 🙂
Glaessgen, E.H., & Stargel, D.S., 2012, The digital twin paradigm for future NASA and US Air Force vehicles, Proc 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, AIAA paper 2012-2018, NF1676L-13293.
Patterson, E.A., Feligiotti, M. & Hack, E., 2013, On the integration of validation, quality assurance and non-destructive evaluation, J. Strain Analysis, 48(1):48-59.
Patterson, E.A., Taylor, R.J. & Bankhead, M., 2016, A framework for an integrated nuclear digital environment, Progress in Nuclear Energy, 87:97-103.
Tuegel, E.J., 2012, The airframe digital twin: some challenges to realization, Proc 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference.