Category Archives: MyResearch

Coping with uncertainty

The first death of a driver in a car while using Autopilot has been widely reported with much hyperbole, though with a few notable exceptions, for instance Nick Bilton in Vanity Fair on July 7th, 2016, who pointed out that statistically you were safer in a Tesla with its Autopilot functioning than driving normally.  This is based on the fact that worldwide there is a fatality for every 60 million miles driven, or every 94 million miles in the US, whereas Joshua Brown’s tragic death was the first in 130 million miles driven by Teslas with Autopilot activated.  This implies that, globally, the fatality rate on your next car journey in an autonomously driven Tesla is roughly half that in a manually driven car.

If you decide to go by plane instead then the probability of arriving safely is extremely good, because only one in every 3 million flights last year resulted in fatalities, or put another way: 3.3 billion passengers were transported with the loss of 641 lives, which is about one death for every 5 million passengers carried.  People worry about these probabilities while at the same time buying lottery tickets with a much lower probability of winning the jackpot, which is about 1 in 14 million in the UK.  In all of these cases, the probability is saying something about the frequency of occurrence of these events.  We don’t know whether the plane will crash on the next flight we take, so we rationalise this uncertainty by defining the frequency of flights that end in a fatal crash.  The French mathematician Pierre-Simon Laplace (1749-1827) thought about probability as a measure of our ignorance or uncertainty.  As we have come to realise the extent of our uncertainty about many things in science (see my post: ‘Electron Uncertainty‘ on July 27th, 2016) and life (see my post: ‘Unexpected bad news for turkeys‘ on November 25th, 2015), the concept of probability has become ever more important.  Caputo has argued that ‘a post-modern style demands a capacity to sustain uncertainty and instability, to live with the unforeseen and unpredictable as positive conditions of the possibility of an open-ended future’.  Most of us can manage this concept when the open-ended future is a lottery jackpot but struggle with the remaining uncertainties of life, particularly when presented with new ones, such as autonomous cars.
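The frequencies quoted above can be checked with a few lines of arithmetic; a minimal sketch, using only the figures given in the text:

```python
# Fatality frequencies quoted in the text, expressed per mile or per passenger.
miles_per_fatality_world = 60e6       # worldwide: 1 road fatality per 60 million miles
miles_per_fatality_us = 94e6          # US: 1 per 94 million miles
miles_per_fatality_autopilot = 130e6  # Tesla Autopilot: 1 fatality in 130 million miles

# Ratio of the Autopilot figure to the worldwide figure: roughly 2,
# i.e. about half the fatality rate per mile driven.
ratio = miles_per_fatality_autopilot / miles_per_fatality_world
print(f"Autopilot vs worldwide miles per fatality: {ratio:.2f}x")

# Air travel: 641 deaths among 3.3 billion passengers carried.
passengers = 3.3e9
deaths = 641
print(f"One death per {passengers / deaths / 1e6:.1f} million passengers")
```

Running this confirms the ratio of about 2.2 between the per-mile figures and the roughly one-in-5-million figure for air passengers.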

Sources:

Bilton, N., How the media screwed up the fatal Tesla accident, Vanity Fair, July 7th, 2016

IATA Safety Report 2014

Caputo JD, Truth: Philosophy in Transit, London: Penguin 2013.

Ball, J., How safe is air travel really? The Guardian, July 24th, 2014

Boagey, R., Who’s behind the wheel? Professional Engineering, 29(8):22-26, August 2016.

Credibility is in the eye of the beholder

Last month I described how computational models were used as more than fables in many areas of applied science, including engineering and precision medicine [‘Models as fables’ on March 16th, 2016].  When people need to make decisions with socioeconomic and/or personal costs based on the predictions from these models, then the models need to be credible.  Credibility, like beauty, is in the eye of the beholder.  It is a challenging problem to convince decision-makers, who are often not experts in the technology or modelling techniques, that the predictions are reliable and accurate.  After all, a model that is reliable and accurate but in which decision-makers have no confidence is almost useless.  In my research we are interested in the credibility of computational mechanics models that are used to optimise the design of load-bearing structures, whether it is the frame of a building, the wing of an aircraft or a hip prosthesis.  We have techniques that allow us to characterise maps of strain using feature vectors [see my post entitled ‘Recognising strain‘ on October 28th, 2015] and then to compare the ‘distances’ between the vectors representing the predictions and measurements.  If the predicted map of strain is a perfect representation of the map measured in a physical prototype, then this ‘distance’ will be zero.  Of course, this never happens, because there is noise in the measured data and our models are never perfect: they contain simplifying assumptions that make the modelling viable.  The difficult question is how much difference is acceptable between the predictions and measurements.  The public expect certainty with respect to the performance of an engineering structure, whereas engineers know that there is always some uncertainty – we can reduce it but that costs money: money for more sophisticated models, for more computational resources to execute them, and for more and better-quality measurements.
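The ‘distance’ comparison described above can be sketched in a few lines, assuming the strain maps have already been condensed into feature vectors; the vectors and the acceptance threshold below are hypothetical, chosen only to illustrate the idea:

```python
from math import sqrt

def validation_distance(predicted, measured):
    """Euclidean distance between the feature vectors representing the
    predicted and measured strain maps; zero would be a perfect match."""
    return sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)))

# Hypothetical feature vectors, e.g. coefficients from a decomposition
# of the predicted and measured strain maps.
prediction = [4.2, -1.3, 0.8, 0.05]
measurement = [4.0, -1.1, 0.9, 0.10]

d = validation_distance(prediction, measurement)

# In practice the acceptable difference would be derived from the
# measurement uncertainty; here it is simply an assumed value.
threshold = 0.5
print(f"distance = {d:.3f}, acceptable = {d <= threshold}")
```

The difficult part is not computing the distance but justifying the threshold, which is where the measurement noise and modelling assumptions enter.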

Models as fables

In his book, ‘Economic Rules – Why economics works, when it fails and how to tell the difference‘, Dani Rodrik describes models as fables – short stories that revolve around a few principal characters who live in an unnamed generic place and whose behaviour and interaction produce an outcome that serves as a lesson of sorts.  This seems to me to be a healthy perspective compared to the almost slavish belief in computational models that is common today in many quarters.  However, in engineering and increasingly in precision medicine, we use computational models as reliable and detailed predictors of the performance of specific systems.  Quantifying this reliability in a way that is useful to non-expert decision-makers is a current area of my research.  This work originated in aerospace engineering, where it is possible, though expensive, to acquire comprehensive and information-rich data from experiments and then to validate models by comparing their predictions to measurements.  We have progressed to nuclear power engineering, in which the extreme conditions and time-scales lead to sparse or incomplete data that make it more challenging to assess the reliability of computational models.  Now, we are just starting to consider models in computational biology, where the inherent variability of biological data and our inability to control the real world present even bigger challenges to establishing model reliability.

Sources:

Dani Rodrik, Economic Rules: Why economics works, when it fails and how to tell the difference, Oxford University Press, 2015

Patterson, E.A., Taylor, R.J. & Bankhead, M., A framework for an integrated nuclear digital environment, Progress in Nuclear Energy, 87:97-103, 2016

Hack, E., Lampeas, G. & Patterson, E.A., An evaluation of a protocol for the validation of computational solid mechanics models, J. Strain Analysis, 51(1):5-13, 2016.

Patterson, E.A., Challenges in experimental strain analysis: interfaces and temperature extremes, J. Strain Analysis, 50(5): 282-3, 2015

Patterson, E.A., On the credibility of engineering models and meta-models, J. Strain Analysis, 50(4):218-220, 2015

Insidious damage

Recently, my son bought a carbon-fibre framed bike for his commute to work. He talked to me about it before he made the decision to go ahead, because he was worried about the susceptibility of carbon-fibre to impact damage. The aircraft industry worries about barely visible impact damage (BVID) because, while the damage might be barely visible on the accessible face that received the impact, within the carbon-fibre component there can be substantial life-shortening damage. I reassured my son that a road bike is unlikely to receive impacts of sufficient energy to induce life-shortening damage, at least in ordinary use. However, such impacts are not unusual in aircraft structures, which means that they have to be inspected for hidden, insidious damage. The most common method of inspection is based on ultrasound, which is reflected preferentially by the damaged areas so that the shape and extent of the damage can be mapped. It is difficult to predict the effect of the damage on the structural performance of the component from this morphology information, so, when damage is found, the component is usually repaired or replaced immediately. In my research group we have been exploring the use of strain measurements to locate and assess damage by comparing the strain distributions in as-manufactured and in-service components. We can measure the strain fields in components using a number of techniques, including digital image correlation (see my post entitled ‘256 shades of grey’) and thermoelastic stress analysis (see my post entitled ‘Counting photons to measure stress‘). The comparison is performed using feature vectors that represent the strain fields, see my post of a few weeks ago entitled ‘Recognising strain’.
The guiding principle is that if damage is present but does not change the strain field, then the structural performance of the component is unchanged; however, when the strain field is changed, then it is easier to predict remanent life from strain data than from morphology data. We have demonstrated that these new concepts work in glass-fibre reinforced laminates and are in the process of reproducing the results in carbon-fibre composites.
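The guiding principle above can be sketched as a simple comparison of feature vectors for the as-manufactured and in-service strain fields; the vectors and the noise level here are hypothetical, standing in for real measured data:

```python
from math import sqrt

def damage_indicator(reference, inservice, noise_level):
    """Compare feature vectors for the as-manufactured (reference) and
    in-service strain fields; a difference larger than the measurement
    noise suggests the damage has altered the strain field."""
    diff = sqrt(sum((a - b) ** 2 for a, b in zip(inservice, reference)))
    return diff, diff > noise_level

# Hypothetical feature vectors representing the strain fields.
reference = [2.0, 0.5, -0.3]
undamaged = [2.02, 0.49, -0.31]  # differs only within measurement noise
damaged = [2.6, 0.1, -0.7]       # strain field altered by damage

noise = 0.05  # assumed measurement noise level
print(damage_indicator(reference, undamaged, noise))  # not flagged
print(damage_indicator(reference, damaged, noise))    # flagged
```

When the indicator is not triggered, the strain field, and hence the structural performance, is taken to be unchanged; when it is, the size and form of the change feed the remanent life assessment.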

Sources

Patterson, E.A., Feligiotti, M., Hack, E., 2013, On the integration of validation, quality assurance and non-destructive evaluation, J. Strain Analysis, 48(1):48-59.

Patki, A.S., Patterson, E.A., 2012, Damage assessment of fibre reinforced composites using shape descriptors, J. Strain Analysis, 47(4):244-253.