
Establishing fidelity and credibility in tests & simulations (FACTS)

A month or so ago I gave a lecture entitled ‘Establishing FACTS (Fidelity And Credibility in Tests & Simulations)’ to the local branch of the Institution of Engineering and Technology (IET). Of course my title was a play on words because the Oxford English Dictionary defines a ‘fact’ as ‘a thing that is known or proved to be true’ or ‘information used as evidence or as part of a report’.  One of my current research interests is how we establish predictions from simulations as evidence that can be used reliably in decision-making.  This is important because simulations based on computational models have become ubiquitous in engineering for, amongst other things, design optimisation and evaluation of structural integrity.  These models need to possess an appropriate level of fidelity and to be credible in the eyes of decision-makers, not just their creators.  Model credibility is usually established through validation processes using a small number of physical tests, which must therefore yield a large quantity of reliable and relevant data [see ‘Getting smarter’ on June 21st, 2017].  Reliable and relevant data means measurements made with low levels of uncertainty under real-world conditions, which is usually challenging.
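For readers who like to see ideas in code, the short Python sketch below illustrates one simple way of checking predictions against test data.  It is a minimal, hypothetical example: the data values, the expanded uncertainty U and the acceptance check are all invented for illustration and are far simpler than a full validation protocol.

```python
import numpy as np

# Illustrative only: synthetic model predictions and test measurements
# at the same five locations (arbitrary units).
predicted = np.array([10.2, 11.8, 13.1, 14.9, 16.4])
measured = np.array([10.0, 12.0, 13.5, 15.0, 16.0])
U = 0.6  # assumed expanded uncertainty of the measurements

# A simple credibility check: how large is the discrepancy between
# prediction and measurement, and does it fall within the measurement
# uncertainty?
error = np.abs(predicted - measured)
within = error <= U

print(f"mean relative error: {np.mean(error / np.abs(measured)):.1%}")
print(f"{int(within.sum())} of {within.size} predictions lie within U")
```

In real validation exercises the comparison is made over information-rich fields of data rather than a handful of points, but the principle is the same: the discrepancy between prediction and measurement is judged against the uncertainty in the measurements.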

These topics recur through much of my research and have found applications in aerospace engineering, nuclear engineering and biology. My lecture to the IET gave an overview of these ideas using applications from each of these fields, some of which I have described in past posts.  So, I have now created a new page on this blog with a catalogue of these past posts on the theme of ‘FACTS‘.  Feel free to have a browse!

Hierarchical modelling in engineering and biology

In 1979, Glenn Harris proposed an analytical hierarchy of models for estimating tactical force effectiveness for the US Army.  It was represented as a pyramid with four layers: a theatre/campaign simulation at the apex, supported by mission-level simulations, below which were engagement models and, at the base, engineering models of assets/equipment.  The idea was adopted by the aerospace industry [see the graphic on the left], which places the complete aircraft at the apex, supported by systems, sub-systems and components beneath in increasing numbers, with the pyramid divided vertically in half to represent physical tests on one side and simulations on the other.  This represents the need to validate predictions from computational models against measurements in the real world [see post on ‘Model validation’ on September 18th, 2012]. These diagrams are schematic representations used by engineers to plan and organise the extensive programmes of modelling and physical testing undertaken during the design of new aircraft [see post on ‘Models as fables’ on March 16th, 2016].  The objective of the MOTIVATE research project is to reduce the quantity and increase the quality of the physical tests so that the pyramid becomes lop-sided, i.e. the triangle representing the experiments and tests is a much thinner slice than the one representing the modelling and simulations [see post on ‘Brave New World’ on January 10th, 2018].
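To make the lop-sided pyramid a little more concrete, here is a toy Python representation.  The layer names follow the aerospace pyramid described above, but the Layer class and all of the counts are hypothetical, chosen only to show a split in which simulations heavily outnumber physical tests at every level.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    tests: int        # physical tests planned at this level
    simulations: int  # model evaluations planned at this level

# Hypothetical counts illustrating the 'lop-sided' goal: at every
# level the slice of simulations is much thicker than the slice of
# physical tests, and numbers grow towards the base of the pyramid.
pyramid = [
    Layer("complete aircraft", tests=1, simulations=5),
    Layer("systems", tests=4, simulations=40),
    Layer("sub-systems", tests=12, simulations=200),
    Layer("components", tests=30, simulations=1500),
]

for layer in pyramid:
    ratio = layer.simulations / layer.tests
    print(f"{layer.name:>17}: {layer.tests:>3} tests, "
          f"{layer.simulations:>5} simulations ({ratio:.0f} sims per test)")
```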

At the same time, I am working with colleagues in toxicology on approaches to establishing credibility in predictive models for chemical risk assessment.  I have constructed an equivalent pyramid to represent the system hierarchy, which is shown on the right in the graphic.  The challenge is the lack of measurement data in the top left of the pyramid, for both moral and legal reasons, which means that there is very limited real-world data available to confirm the predictions from the computational models represented on the right of the pyramid.  In other words, my colleagues in toxicology, and computational biology in general, are already where my collaborators in the aerospace industry would like to be, while my collaborators in aerospace are where the computational biologists would like to be.  In both cases, a paradigm shift is required from objectivism towards relativism because, in the absence of comprehensive real-world measurement data, validation or confirmation of predictions becomes a social process involving judgement about where the predictions lie on a continuum of usefulness.

Sources:

Harris GL, Computer models, laboratory simulators, and test ranges: meeting the challenge of estimating tactical force effectiveness in the 1980’s, US Army Command and General Staff College, May 1979.

Trevisani DA & Sisti AF, Air Force hierarchy of models: a look inside the great pyramid, Proc. SPIE 4026, Enabling Technology for Simulation Science IV, 23 June 2000.

Patterson EA & Whelan MP, A framework to establish credibility of computational models in biology, Progress in Biophysics and Molecular Biology, 129:13-19, 2017.

Models as fables

In his book, ‘Economic Rules – Why economics works, when it fails and how to tell the difference‘, Dani Rodrik describes models as fables – short stories that revolve around a few principal characters who live in an unnamed generic place and whose behaviour and interaction produce an outcome that serves as a lesson of sorts.  This seems to me a healthy perspective compared to the almost slavish belief in computational models that is common today in many quarters.  However, in engineering, and increasingly in precision medicine, we use computational models as reliable and detailed predictors of the performance of specific systems.  Quantifying this reliability in a way that is useful to non-expert decision-makers is a current area of my research.  This work originated in aerospace engineering, where it is possible, though expensive, to acquire comprehensive and information-rich data from experiments and then to validate models by comparing their predictions to measurements.  We have progressed to nuclear power engineering, in which the extreme conditions and time-scales lead to sparse or incomplete data that make it more challenging to assess the reliability of computational models.  Now, we are just starting to consider models in computational biology, where the inherent variability of biological data and our inability to control the real world present even bigger challenges to establishing model reliability.

Sources:

Dani Rodrik, Economic Rules: Why economics works, when it fails and how to tell the difference, Oxford University Press, 2015.

Patterson, E.A., Taylor, R.J. & Bankhead, M., A framework for an integrated nuclear digital environment, Progress in Nuclear Energy, 87:97-103, 2016.

Hack, E., Lampeas, G. & Patterson, E.A., An evaluation of a protocol for the validation of computational solid mechanics models, J. Strain Analysis, 51(1):5-13, 2016.

Patterson, E.A., Challenges in experimental strain analysis: interfaces and temperature extremes, J. Strain Analysis, 50(5):282-283, 2015.

Patterson, E.A., On the credibility of engineering models and meta-models, J. Strain Analysis, 50(4):218-220, 2015.