Tag Archives: simulation

Happy New Year!

Image: sculpture of a skeletal person leading a skeletal dinosaur

This year I have written about 20,000 words in 52 posts (including this one); and, since this is the last post of the year, I thought I would take a brief look back at what has preoccupied me in 2021.  Perhaps not surprisingly, the impact of the coronavirus on our lifestyle has featured regularly – almost every week between mid-March and mid-April when we were in lockdown in the UK.  However, the other topics that I have written about frequently are my research on the dynamics of nanoparticles and, in the last six months, dealing with uncertainty in digital engineering and decision-making.  I have also returned several times to innovation processes and transitioning lab-based research into industry.  While following COP26 in early November, I wrote a series of three posts focussed on energy consumption and the paradigm shifts required to slow down climate change.  There are some connections between these topics: viruses are nanoparticles whose transport and dynamics we do not fully understand; and digital engineering tools are being used to explore zero-carbon approaches to, for example, energy generation and air transport.  The level of complexity, innovation and urgency associated with developing solutions to these challenges means that there are always some unknowns and some uncertainty when making the associated decisions.

The links below are grouped by the topics mentioned above.  I expect there will be more on all of these topics in 2022; however, the topic of next week’s post is unknown because I have not written any posts in advance.  I hope that the uncertainty about the topic of the next post will keep you reading in 2022! 

Coronavirus pandemic: ‘Distancing ourselves from each other’ on January 13th, 2021; ‘On the impact of writing on well-being’ on March 3rd, 2021; ‘Collegiality as a defence against pandemic burnout’ on March 24th, 2021; ‘It’s tiring looking at yourself’ on March 31st, 2021; ‘Switching off and walking in circles’ on April 7th, 2021; ‘An upside to lockdown’ on April 14th, 2021; ‘A brief respite in a long campaign to overcome coronavirus’ on June 23rd, 2021; and ‘It is hard to remain positive’ on November 3rd, 2021.

Energy and climate change: ‘When you invent the ship, you invent the shipwreck’ on August 25th, 2021; ‘It is hard to remain positive’ on November 3rd, 2021; ‘Where we are and what we have’ on November 24th, 2021; ‘Disruptive change required to avoid existential threats’ on December 1st, 2021; and ‘Bringing an end to thermodynamic whoopee’ on December 8th, 2021.

Innovation processes: ‘Slowly crossing the valley of death’ on January 27th, 2021; ‘Out of the valley of death into a hype cycle?’ on February 24th, 2021; ‘Innovative design too far ahead of the market?’ on May 5th, 2021; and ‘Jigsaw puzzling without a picture’ on October 27th, 2021.

Nanoparticles: ‘Going against the flow‘ on February 3rd, 2021; ‘Seeing things with nanoparticles‘ on March 10th, 2021; and ‘Nano biomechanical engineering of agent delivery to cells‘ on December 15th, 2021.

Uncertainty: ‘Certainty is unattainable and near-certainty is unaffordable’ on May 12th, 2021; ‘Near earth objects make tomorrow a little less than certain’ on May 26th, 2021; ‘Negative capability and optimal ambiguity’ on July 7th, 2021; ‘Deep uncertainty and meta-ignorance’ on July 21st, 2021; ‘Somethings will always be unknown’ on August 18th, 2021; ‘Jigsaw puzzling without a picture’ on October 27th, 2021; and ‘Do you know RIO?’ on November 17th, 2021.

Jigsaw puzzling without a picture

A350 XWB passes Maximum Wing Bending test

Research sometimes feels like putting together a jigsaw puzzle without the picture or being sure you have all of the pieces.  The pieces we are trying to fit together at the moment are: (i) image decomposition of strain fields [see ‘Recognising strain’ on October 28th, 2015], which allows fields containing millions of data values to be represented by a feature vector with only tens of elements and is useful for comparing maps or fields of predictions from a computational model with measurements made in the real world; (ii) evaluation of the variation in measurement uncertainty over a field of view of measured displacements or strains in a large structure [see ‘Industrial uncertainty’ on December 12th, 2018], which provides information about the quality of the measurements; and (iii) a probabilistic validation metric that provides a measure of how well predictions from a computational model represent measurements made in the real world [see ‘Million to one’ on November 21st, 2018].  We have found some of the missing pieces of the jigsaw; for example, we have established how to represent the distribution of measurement uncertainty in the feature vector domain [see ‘From strain measurements to assessing El Niño events’ on March 17th, 2021] so that it can be used to assess the significance of differences between measurements and predictions represented by their feature vectors – this connects (i) and (ii).  Very recently, we have demonstrated a generic technique for performing image decomposition of irregularly shaped fields of data, or data fields with holes [see Christian et al, 2021], which extends the applicability of our method for comparing measurements and predictions from idealised shapes to real-world objects.  This allows (i) to be used in industrial applications, but we still have to work out how to connect it to the probabilistic metric in (iii) while also incorporating spatially-varying uncertainty.  These techniques can be used in a wide range of applications, as demonstrated in our recent work on El Niño events [see Alexiadis et al, 2021], because, by treating all fields of data as images, they are agnostic about the source and format of the data.  However, at the moment, our main focus is on their application to ground tests on aircraft structures as part of the Smarter Testing project, in collaboration with Airbus, the Centre for Modelling & Simulation, Dassault Systèmes, GOM UK Ltd, and the National Physical Laboratory, with funding from the Aerospace Technology Institute.  Together we are working towards digital continuity across virtual and physical testing of aircraft structures to provide live data fusion and to enable condition-led inspections, test control and validation of computational models.  We anticipate that these advances will reduce the time and cost of physical tests and accelerate the development of new designs of aircraft that will contribute to global sustainability targets (the aerospace industry has committed to reducing CO2 emissions to 50% of 2005 levels by 2050).  The Smarter Testing project has an ambitious goal, which reveals that our pieces of the jigsaw puzzle belong to a small section of a much larger one.
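To make pieces (i) and (iii) a little more concrete, here is a minimal sketch in Python of the general idea, under stated assumptions: I have used a two-dimensional Chebyshev polynomial basis, which is one common choice for image decomposition, and a simple k-sigma interval check stands in for the probabilistic validation metric.  The function names, the polynomial order and the uncertainty values are invented for illustration; this is not the implementation used in our research.

import numpy as np
from numpy.polynomial import chebyshev as C

def feature_vector(field, order=6):
    # Project a 2D field onto products of Chebyshev polynomials
    # T_p(y) * T_q(x) by least squares; return the (order+1)**2
    # coefficients as a compact feature vector.
    m, n = field.shape
    x = np.linspace(-1.0, 1.0, n)
    y = np.linspace(-1.0, 1.0, m)
    Vx = C.chebvander(x, order)            # shape (n, order+1)
    Vy = C.chebvander(y, order)            # shape (m, order+1)
    A = np.kron(Vy, Vx)                    # shape (m*n, (order+1)**2)
    coeffs, *_ = np.linalg.lstsq(A, field.ravel(), rcond=None)
    return coeffs

def fields_agree(measured, predicted, u_feature, k=2.0, order=6):
    # Crude acceptance test: every difference between the measured and
    # predicted feature-vector elements lies within k times the
    # measurement uncertainty expressed in the feature-vector domain.
    d = np.abs(feature_vector(measured, order) - feature_vector(predicted, order))
    return bool(np.all(d <= k * u_feature))

# Example with synthetic fields on a 100 x 100 grid.
y, x = np.mgrid[-1:1:100j, -1:1:100j]
predicted = x**2 - 0.5 * y                          # model prediction
noise = 0.05 * np.random.default_rng(0).normal(size=x.shape)
measured = predicted + noise                        # 'measurement' with noise
u_feature = 0.02 * np.ones(49)                      # assumed uncertainty, (6+1)**2 features
print(fields_agree(measured, predicted, u_feature))

The point of the decomposition is dimensionality: the 10,000-pixel fields above are compared through just 49 coefficients, and u_feature plays the role of the measurement uncertainty after transformation into the feature-vector domain, which connects (i) and (ii) as described above.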

For more on the Smarter Testing project see:

https://www.aerospacetestinginternational.com/news/structural-testing/smarter-testing-research-program-to-link-virtual-and-physical-aerospace-testing.html

https://www.aerospacetestinginternational.com/opinion/how-integrating-the-virtual-and-physical-will-make-aerospace-testing-and-certification-smarter.html

References

Alexiadis A, Ferson S, Patterson EA. Transformation of measurement uncertainties into low-dimensional feature vector space. Royal Society Open Science. 8(3):201086, 2021.

Christian WJ, Dean AD, Dvurecenska K, Middleton CA, Patterson EA. Comparing full-field data from structural components with complicated geometries. Royal Society Open Science. 8(9):210916, 2021.

Image: http://www.airbus.com/galleries/photo-gallery

Somethings will always be unknown

Image: fruit fly nervous system, Albert Cardona, HHMI Janelia Research Campus, Wellcome Image Awards 2015

The philosophy of science has oscillated between believing that everything is knowable and believing that some things will always be unknowable. In 1872, the German physiologist Emil du Bois-Reymond declared ‘we do not know and will not know’, implying that there would always be limits to our scientific knowledge. Thirty years later, David Hilbert, a German mathematician, stated that nothing is unknowable in the natural sciences; he believed that by considering some things to be unknowable we limit our ability to know. However, Kurt Gödel, a Viennese mathematician who moved to Princeton in 1940, demonstrated in his incompleteness theorems that any consistent formal system rich enough to express arithmetic will contain statements that are true but unprovable within the system, and that such a system cannot demonstrate its own consistency. I think that this implies some things will remain unknowable, or at least uncertain. Gödel believed that his theorems implied that the human mind infinitely surpasses the power of any finite machine, and Roger Penrose has deployed the incompleteness theorems to argue that consciousness transcends the formal logic of computers, which perhaps implies that artificial intelligence will never replace human intelligence [see ‘Four requirements for consciousness’ on January 22nd, 2020].  At a more mundane level, Gödel’s theorems imply that engineers will always have to deal with the unknowable when using mathematical models to predict the behaviour of complex systems and, of course, to avoid meta-ignorance, we have to assume that there are always unknown unknowns [see ‘Deep uncertainty and meta-ignorance’ on July 21st, 2021].
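For reference, the two theorems are usually stated along the following lines (a standard textbook paraphrase in LaTeX, not Gödel’s original formulation):

\textbf{First incompleteness theorem.} If $T$ is a consistent, effectively
axiomatized formal system that can express elementary arithmetic, then
there is a sentence $G_T$ such that neither it nor its negation is
provable in $T$:
\[ T \nvdash G_T \quad \text{and} \quad T \nvdash \neg G_T . \]
\textbf{Second incompleteness theorem.} Such a system cannot prove its
own consistency:
\[ T \nvdash \mathrm{Con}(T) . \]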

Source: Book review by Nick Stephen, ‘Journey to the Edge of Reason by Stephen Budiansky – ruthless logic’, FT Weekend, 1st June 2021.

Deep uncertainty and meta-ignorance

The term ‘unknown unknowns’ was made famous by Donald Rumsfeld almost 20 years ago when, as US Secretary of Defense, he used it to describe the lack of evidence about terrorist groups being supplied with weapons of mass destruction by the Iraqi government. However, the term was probably coined almost 50 years earlier by Joseph Luft and Harrington Ingham when they developed the Johari window as a heuristic tool to help people better understand their relationships.  In engineering, and other fields in which predictive models are important tools, it is used to describe situations about which there is deep uncertainty.  Deep uncertainty refers to situations where experts do not know or cannot agree about which models to use, how to describe the uncertainties present, or how to interpret the outcomes from predictive models.  Rumsfeld talked about known knowns, known unknowns, and unknown unknowns; an alternative, simpler but perhaps less catchy, classification is ‘the known, the unknown, and the unknowable’, which Diebold, Doherty and Herring used as part of the title of their book on financial risk management.  David Spiegelhalter suggests ‘risk, uncertainty and ignorance’ before providing a more sophisticated classification: aleatory uncertainty, epistemic uncertainty and ontological uncertainty.  Aleatory uncertainty is the inevitable unpredictability of the future that can be fully described using probability; epistemic uncertainty is a lack of knowledge about the structure and parameters of models used to predict the future; and ontological uncertainty is a complete lack of knowledge and understanding about the entire modelling process, i.e. deep uncertainty.  When we do not recognise that ontological uncertainty is present, we have meta-ignorance, which means failing even to consider the possibility of being wrong.  For a number of years, part of my research effort has been focussed on predictive models that are unprincipled and untestable; in other words, they are not built on widely-accepted principles or scientific laws and it is not feasible to conduct physical tests to acquire data to demonstrate their validity [see editorial ‘On the credibility of engineering models and meta-models’, JSA 50(4):2015].  Some people would say that untestability makes such models unscientific, based on Popper’s requirement that a scientific theory must be refutable.  However, in reality, unprincipled and untestable models are encountered in a range of fields, including space engineering, fusion energy and toxicology.  We have developed a set of credibility factors designed as a heuristic tool to allow the relevance of such models and their predictions to be evaluated systematically [see ‘Credible predictions for regulatory decision-making’ on December 9th, 2020].  One outcome is to allow experts to agree on their disagreements and ignorance, i.e., to define the extent of our ontological uncertainty, which is an important step towards making rational decisions about the future when there is deep uncertainty.
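As a toy illustration of Spiegelhalter’s first two categories, and of why the third is different in kind, consider the following sketch; the model, the numbers and the names are invented for this example and carry no engineering significance.

import numpy as np

rng = np.random.default_rng(seed=1)

def margin(load, strength):
    # Toy predictive model: margin of safety of a component.
    return strength - load

# Aleatory uncertainty: inherent variability that can be described by a
# probability distribution and explored by sampling, e.g. scatter in
# the applied load.
loads = rng.normal(loc=100.0, scale=10.0, size=10_000)

# Epistemic uncertainty: lack of knowledge about a model parameter; if
# experts can only bound the strength, we sweep the interval instead of
# assigning it a distribution.
for strength in (120.0, 130.0, 140.0):   # lower, central and upper estimates
    p_fail = float(np.mean(margin(loads, strength) < 0.0))
    print(f"strength = {strength}: P(failure) ~ {p_fail:.4f}")

# Ontological uncertainty has no counterpart in the code: it is the
# possibility that the model structure itself is wrong in ways we have
# not conceived, which no amount of sampling or parameter sweeping can
# reveal.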

References

Diebold FX, Doherty NA, Herring RJ, eds. The Known, the Unknown, and the Unknowable in Financial Risk Management: Measurement and Theory Advancing Practice. Princeton, NJ: Princeton University Press, 2010.

Spiegelhalter D. Risk and uncertainty communication. Annual Review of Statistics and Its Application. 4:31-60, 2017.

Patterson EA, Whelan MP. On the validation of variable fidelity multi-physics simulations. J. Sound and Vibration. 448:247-58, 2019.

Patterson EA, Whelan MP, Worth AP. The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application. Computational Toxicology. 100144, 2020.