Tag Archives: MyResearch

A tiny contribution to culture?

This year I would like to think more and do a little less. Or, in other words, to make a better job of fewer things. This resolution has caused me to think about why I write this blog and whether I should continue to do so. I started writing it in 2012 as part of an outreach effort mandated by a Royal Society Wolfson Research Merit Award that I held for five years until February 2016. So, the original motivation for writing a weekly blog has expired but obviously I have continued – why?

Well, a number of reasons come to mind. First: loyalty to my readers – in 2015 visitors to this blog would have filled six New York subway trains [see my post of January 21st, 2016]. The number of visitors more than doubled in 2016, so that now you would fill a small Premier League football stadium. It’s difficult to disappoint this many readers.

Second: the annual doubling of the blog’s readership perhaps suggests that I am doing something worthwhile – making a small contribution to our culture and society.  To quote the neuroscientist Vittorio Gallese in conversation with Stefan Klein ‘by passing on just a little bit of knowledge, every human being makes a contribution to that culture’.   Most of the time this is an altruistic motivation but occasionally it is converted into an inner warm glow when I meet someone who says ‘I read your blog and …’

The third reason is purely selfish: the process of writing is therapeutic and provides an opportunity to collect, order and record my thoughts and ideas. My editor thinks that I focus too much on re-blogging other people’s ideas and that more originality would bring a bigger increase in readership. She is probably right about the connection between originality and readership, but original thinking is hard to do, especially on a weekly basis, so often the best I can do is to join dots in ways that perhaps you haven’t thought about.

My final reason is more pecuniary. As an academic researcher, I need to apply for funding to support my research group of about a dozen people.  Engagement in enhancing the public understanding of science and technology is an expectation of many funding bodies and so an established blog with a stadium-sized readership is an asset that justifies the investment of time.

The relative importance of these reasons varies with my mood and audience but together they are sufficient to ensure that writing a weekly post will be one of the fewer things that I plan to do better in 2017.  I guess that means fewer introspective posts like this one!

Best wishes for a happy and prosperous New Year to all my readers!

Source: Stefan Klein, We are all stardust, London: Scribe, 2015.

Can you trust your digital twin?

Author’s digital twin?

There is about a 3% probability that you have a twin: about 32 in every 1000 people are one of a pair of twins. At the moment an even smaller number of us have a digital twin, but this is the direction in which computational biomedicine is moving, along with other fields. For instance, soon all aircraft, and most new nuclear power plants, will have digital twins. Digital twins are computational representations of individual members of a population, or of a fleet in the case of aircraft and power plants.

For an engineering system, its computer-aided design (CAD) is the beginning of its twin, to which information is added from the quality assurance inspections before it leaves the factory, from non-destructive inspections during routine maintenance, and from health-monitoring data acquired during service operations. The result is an integrated model and database, describing the condition and history of the system from conception to the present, that can be used to predict its response to anticipated changes in its environment, its remaining useful life, or the impact of proposed modifications to its form and function. It is more challenging to create digital twins of ourselves, because we don’t have original design drawings or direct access to the onboard health-monitoring system, but this is being worked on.

However, digital twins are only useful if people believe in the behaviour or performance that they predict and are prepared to make decisions based on the predictions; in other words, if the digital twins possess credibility. Credibility appears to be like beauty: it is in the eye of the beholder. Most modellers believe that their models are both beautiful and credible, after all they are their ‘babies’, but unfortunately modellers are not usually the decision-makers, who often have a different frame of reference and set of values.
In my group, one current line of research is to provide metrics and language that will assist in conveying confidence in the reliability of a digital twin to non-expert decision-makers, and another is to create methodologies for evaluating the evidence prior to making a decision. The approach differs depending on the extent to which the underlying models are principled, i.e. based on the laws of science, and can be tested using observations from the real world. In practice, even with principled, testable models, a digital twin will never be an identical twin and hence there will always be some uncertainty; decisions therefore remain a matter of judgement based on a sound understanding of the best available evidence – so you are always likely to need advice from a friendly engineer 🙂
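To give a flavour of the kind of check involved – this is a toy sketch, not our published methodology, and all the quantities and numbers are invented for illustration – a digital twin’s prediction can be compared with in-service monitoring data and flagged when the discrepancy exceeds the combined uncertainty of model and measurement:

```python
import math

def credible(predicted, measured, sigma_model, sigma_measurement, k=2.0):
    """Judge whether prediction and observation agree within k combined
    standard uncertainties (k=2 corresponds to roughly 95% coverage for
    approximately normal errors)."""
    combined = math.sqrt(sigma_model ** 2 + sigma_measurement ** 2)
    return abs(predicted - measured) <= k * combined

# Hypothetical check for an airframe digital twin: the twin predicts a
# fatigue crack length of 1.20 mm; an inspection measures 1.35 mm.
ok = credible(predicted=1.20, measured=1.35,
              sigma_model=0.10, sigma_measurement=0.05)
print(ok)  # True: the 0.15 mm discrepancy lies within 2 x 0.112 mm
```

The point of such a metric is less the arithmetic than the language: a non-expert decision-maker can be told that the twin’s prediction is consistent with the evidence at a stated level of confidence, rather than being asked to trust the model outright.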

Sources:

De Lange, C., 2014, Meet your unborn child – before it’s conceived, New Scientist, 12 April 2014, p.8.

Glaessgen, E.H., & Stargel, D.S., 2012, The digital twin paradigm for future NASA and US Air Force vehicles, Proc 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, AIAA paper 2012-2018, NF1676L-13293.

Patterson, E.A., Feligiotti, M. & Hack, E., 2013, On the integration of validation, quality assurance and non-destructive evaluation, J. Strain Analysis, 48(1):48-59.

Patterson, E.A., Taylor, R.J. & Bankhead, M., 2016, A framework for an integrated nuclear digital environment, Progress in Nuclear Energy, 87:97-103.

Patterson, E.A. & Whelan, M.P., 2016, A framework to establish credibility of computational models in biology, Progress in Biophysics & Molecular Biology, doi: 10.1016/j.pbiomolbio.2016.08.007.

Tuegel, E.J., 2012, The airframe digital twin: some challenges to realization, Proc 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference.

Credibility is in the eye of the beholder

Last month I described how computational models were used as more than fables in many areas of applied science, including engineering and precision medicine [‘Models as fables’ on March 16th, 2016]. When people need to make decisions with socioeconomic and/or personal costs based on the predictions from these models, then the models need to be credible. Credibility is like beauty: it is in the eye of the beholder. It is a challenging problem to convince decision-makers, who are often not expert in the technology or modelling techniques, that the predictions are reliable and accurate. After all, a model that is reliable and accurate but in which decision-makers have no confidence is almost useless.

In my research we are interested in the credibility of computational mechanics models that are used to optimise the design of load-bearing structures, whether it is the frame of a building, the wing of an aircraft or a hip prosthesis. We have techniques that allow us to characterise maps of strain using feature vectors [see my post entitled ‘Recognising strain‘ on October 28th, 2015] and then to compare the ‘distances’ between the vectors representing the predictions and measurements. If the predicted map of strain is a perfect representation of the map measured in a physical prototype, then this ‘distance’ will be zero. Of course, this never happens, because there is noise in the measured data and our models are never perfect: they contain simplifying assumptions that make the modelling viable.

The difficult question is how much difference is acceptable between the predictions and measurements. The public expect certainty with respect to the performance of an engineering structure, whereas engineers know that there is always some uncertainty – we can reduce it, but that costs money: money for more sophisticated models, for more computational resources to execute the models, and for more and better quality measurements.
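The idea of a ‘distance’ between predictions and measurements can be sketched in a few lines of code. This is a deliberately simplified illustration: in our published work the feature vectors come from a two-dimensional orthogonal decomposition (e.g. Chebyshev or Zernike polynomials) of full-field strain maps, whereas here a one-dimensional cosine basis stands in for it, and the strain profile, noise and threshold are all invented:

```python
import math

def feature_vector(profile, n_coeffs=5):
    """Project a sampled strain profile onto a cosine basis - a simple
    stand-in for the orthogonal polynomial decomposition of a strain map."""
    n = len(profile)
    return [sum(profile[i] * math.cos(math.pi * k * (i + 0.5) / n)
                for i in range(n)) * 2.0 / n
            for k in range(n_coeffs)]

def distance(predicted, measured):
    """Euclidean 'distance' between feature vectors; zero only when the
    prediction is a perfect representation of the measurement."""
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)))

# Synthetic example: a model prediction and a noisy 'measurement' of the
# same strain field (the noise mimics measurement uncertainty).
predicted_strain = [math.sin(math.pi * i / 99) for i in range(100)]
measured_strain = [s + 0.01 * math.cos(17 * i)
                   for i, s in enumerate(predicted_strain)]

d = distance(feature_vector(predicted_strain), feature_vector(measured_strain))
noise_threshold = 0.05  # hypothetical: set by the measurement uncertainty
print(d < noise_threshold)
```

The final comparison captures the essence of the difficult question above: the acceptable difference between predictions and measurements is not zero but is judged relative to the uncertainty in the measured data.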

Models as fables

In his book, ‘Economic Rules – Why economics works, when it fails and how to tell the difference‘, Dani Rodrik describes models as fables – short stories that revolve around a few principal characters who live in an unnamed generic place and whose behaviour and interaction produce an outcome that serves as a lesson of sorts. This seems to me to be a healthy perspective compared to the almost slavish belief in computational models that is common today in many quarters. However, in engineering, and increasingly in precision medicine, we use computational models as reliable and detailed predictors of the performance of specific systems. Quantifying this reliability in a way that is useful to non-expert decision-makers is a current area of my research.

This work originated in aerospace engineering, where it is possible, though expensive, to acquire comprehensive and information-rich data from experiments and then to validate models by comparing their predictions to measurements. We have progressed to nuclear power engineering, in which the extreme conditions and time-scales lead to sparse or incomplete data that make it more challenging to assess the reliability of computational models. Now, we are just starting to consider models in computational biology, where the inherent variability of biological data and our inability to control the real world present even bigger challenges to establishing model reliability.

Sources:

Rodrik, D., 2015, Economic Rules: Why economics works, when it fails and how to tell the difference, Oxford University Press.

Patterson, E.A., Taylor, R.J. & Bankhead, M., 2016, A framework for an integrated nuclear digital environment, Progress in Nuclear Energy, 87:97-103.

Hack, E., Lampeas, G. & Patterson, E.A., 2016, An evaluation of a protocol for the validation of computational solid mechanics models, J. Strain Analysis, 51(1):5-13.

Patterson, E.A., 2015, Challenges in experimental strain analysis: interfaces and temperature extremes, J. Strain Analysis, 50(5):282-3.

Patterson, E.A., 2015, On the credibility of engineering models and meta-models, J. Strain Analysis, 50(4):218-220.