Category Archives: uncertainty

Tyranny of quantification

There is a growing feeling that our use of metrics is doing more harm than good.  My title today is a misquote of Rebecca Solnit; she actually said ‘tyranny of the quantifiable’, or perhaps it is a combination of her quote and the title of a new book by Jerry Muller, ‘The Tyranny of Metrics’, which was reviewed in the FT Weekend on 27/28 January 2018 by Tim Harford, who recently published a book called Messy that dealt with, amongst other things, similar issues.

I wrote ‘growing feeling’ and then almost fell into the trap of attempting to quantify the feeling by providing you with some evidence; but I stopped short of trying to assign any numbers to the feeling and its growth – that would have been illogical, since a feeling is defined as ‘an emotional state or reaction, an idea or belief, especially a vague or irrational one’.

Harford puts it slightly differently: that ‘many of us have a vague sense that metrics are leading us astray, stripping away context, devaluing subtle human judgment’.  Advances in sensors and the ubiquity of computing power allow vast amounts of data to be acquired and processed into metrics that can be ranked and used to make and justify decisions.  Data, and consequently empiricism, is king.  Rationalism has been cast out into the wilderness.  Like Muller, I am not suggesting that metrics are useless, but that they are only one tool in decision-making and that they need to be used by those with relevant expertise and experience in order to avoid unexpected consequences.

To quote Muller: ‘measurement is not an alternative to judgement: measurement demands judgement – judgement about whether to measure, what to measure, how to evaluate the significance of what’s been measured, whether rewards and penalties will be attached to the results, and to whom to make the measurements available’.

Sources:

Lunch with the FT – Rebecca Solnit by Rana Foroohar in FT Weekend 10/11 February 2018

Desperate measures by Tim Harford in FT Weekend 27/28 January 2018

Muller JZ, The Tyranny of Metrics, Princeton NJ: Princeton University Press, 2018.

Image: http://maxpixel.freegreatpicture.com/Measurement-Stopwatch-Timer-Clock-Symbol-Icon-2624277

Less uncertain predictions

Ultrasound time-of-flight C-scan of the delaminations formed by a 12 J impact on a crossply laminate (top) and the corresponding surface strain field (bottom).

Here is a challenge for you: overall this blog has a readability index of 8.6 on the Flesch-Kincaid Grade Level scale, which means it should be easily understood by 14-15 year olds.  However, my editor didn’t understand the first draft of the post below, and so I have revised it; but it still scores 15 on the Flesch-Kincaid scale!  So, it might require the formation of some larger-scale neuronal assemblies in your brain [see my post entitled ‘Digital Hive Mind’ on November 30th, 2016].
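
If you are curious how these grades are arrived at, the Flesch-Kincaid Grade Level is just a weighted combination of average sentence length and average syllables per word.  Here is a minimal sketch in Python; the counts would come from whatever text-analysis tool you prefer, and the numbers below are made up purely for illustration:

    def fk_grade(words, sentences, syllables):
        # Flesch-Kincaid Grade Level: longer sentences and longer words
        # both push the required school grade upwards
        return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

    # Hypothetical counts: 20 words per sentence, 1.5 syllables per word
    print(fk_grade(2000, 100, 3000))  # ~9.9, i.e. roughly ninth grade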

I wrote a couple of weeks ago about guessing the weight of a reader.  I used some national statistics and suggested how they could be updated using real data about readers’ weights with the help of Bayesian statistics [see my post entitled ‘Uncertainty about Bayesian statistics’ on July 5th, 2017].  It was an attempt to shed light on the topic of Bayesian statistics, which tends to be obscure or unknown.  I was stimulated by our own research using Bayesian statistics to predict the likelihood of failure in damaged components manufactured using composite material, such as the carbon-fibre laminates used in the aerospace industry.  We are interested in the maximum load that can be carried by a carbon-fibre laminate after it has sustained some impact damage, such as might occur to an aircraft wing-skin that is hit by debris from the runway during take-off, which was the cause of the Concorde crash in Paris on July 25th, 2000.  The maximum safe load of the carbon-fibre laminate varies with the energy of the impact, as well as with the discrepancies introduced during its manufacture.  These multiple variables make our analysis more involved than the one I described for readers’ weights.  However, we have shown that the remaining strength of a damaged laminate can be more reliably predicted from measurements of the change in the strain pattern around the damage than from direct measurements of the damage itself, for instance using ultrasound.
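
For a flavour of what ‘updating with Bayesian statistics’ means in the simplest single-variable case – nothing like the full multi-variable analysis in our paper – here is a sketch of the conjugate update of a normal prior by one noisy measurement; all the numbers are hypothetical:

    def update_normal(prior_mean, prior_var, obs, obs_var):
        # Posterior of a normal prior after one normally-distributed
        # observation: precisions (1/variance) simply add together
        post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
        post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
        return post_mean, post_var

    # Prior guess of a reader's weight from national statistics,
    # updated by one bathroom-scale reading (all values hypothetical)
    print(update_normal(80.0, 15.0**2, 70.0, 5.0**2))  # mean pulled towards 70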

This might seem to be a counter-intuitive result.  However, it occurs because the failure of the laminate is driven by the energy available to create new surfaces as it fractures [see my blog on Griffith fracture on April 26th, 2017], and the strain pattern provides more information about the energy distribution than does the extent of the existing damage.  Why is this important?  Well, it offers a potentially more reliable approach to inspecting aircraft that could reduce operating costs and increase safety.
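
To give a sense of the energy argument, Griffith’s criterion for a simple brittle solid balances the elastic energy released by a growing crack against the energy needed to create the new surfaces; for a through-crack of half-length a under plane stress, the critical remote stress is sqrt(2·E·γ/(π·a)).  A sketch with deliberately illustrative values – a damaged composite laminate is far more complicated than this:

    import math

    def griffith_stress(E, gamma_s, a):
        # Critical remote stress at which a crack of half-length a grows:
        # energy released by crack extension equals energy of new surfaces
        return math.sqrt(2.0 * E * gamma_s / (math.pi * a))

    # Hypothetical values: E = 70 GPa, surface energy 1 J/m^2, a = 1 mm
    print(griffith_stress(70e9, 1.0, 1e-3) / 1e6)  # ~6.7 MPa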

If you have stayed with me to the end, then well done!  If you want to read more, then see: Christian WJR, Patterson EA & DiazDelaO FA, Robust empirical predictions of residual performance of damaged composites with quantified uncertainties, J. Nondestruct. Eval. 36:36, 2017 (doi: 10.1007/s10921-017-0416-6).

Can you trust your digital twin?

Author’s digital twin?

There is about a 3% probability that you have a twin: about 32 in 1000 people are one of a pair of twins.  At the moment an even smaller number of us have a digital twin, but this is the direction in which computational biomedicine is moving, along with other fields.  For instance, soon all aircraft, and most new nuclear power plants, will have digital twins.  Digital twins are computational representations of individual members of a population, or of a fleet in the case of aircraft and power plants.

For an engineering system, its computer-aided design (CAD) is the beginning of its twin, to which information is added from the quality assurance inspections before it leaves the factory and from non-destructive inspections during routine maintenance, as well as data acquired during service operations from health monitoring.  The result is an integrated model and database, describing the condition and history of the system from conception to the present, that can be used to predict its response to anticipated changes in its environment, its remaining useful life, or the impact of proposed modifications to its form and function.  It is more challenging to create digital twins of ourselves, because we don’t have original design drawings or direct access to the onboard health-monitoring system, but this is being worked on.

However, digital twins are only useful if people believe in the behaviour or performance that they predict and are prepared to make decisions based on the predictions; in other words, if the digital twins possess credibility.  Credibility appears to be like beauty: it is in the eye of the beholder.  Most modellers believe that their models are both beautiful and credible – after all, they are their ‘babies’ – but unfortunately modellers are not usually the decision-makers, who often have a different frame of reference and set of values.

In my group, one current line of research is to provide metrics and language that will assist in conveying confidence in the reliability of a digital twin to non-expert decision-makers, and another is to create methodologies for evaluating the evidence prior to making a decision.  The approach differs depending on the extent to which the underlying models are principled, i.e. based on the laws of science, and can be tested using observations from the real world.  In practice, even with principled, testable models, a digital twin will never be an identical twin, and hence there will always be some uncertainty, so that decisions remain a matter of judgement based on a sound understanding of the best available evidence – so you are always likely to need advice from a friendly engineer 🙂
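
As a toy illustration of the bookkeeping involved – not any real framework, and with all names and the severity scale invented for the purpose – a digital twin couples a design model with an ever-growing history of evidence about the individual asset:

    from dataclasses import dataclass, field

    @dataclass
    class InspectionRecord:
        date: str
        method: str      # e.g. 'ultrasound C-scan' or 'strain survey'
        severity: float  # 0 = pristine, 1 = critical (hypothetical scale)

    @dataclass
    class DigitalTwin:
        asset_id: str
        design_model: dict          # stand-in for the CAD/analysis model
        history: list = field(default_factory=list)

        def add_inspection(self, record: InspectionRecord) -> None:
            # The twin accumulates evidence over the life of the asset
            self.history.append(record)

        def worst_known_damage(self) -> float:
            # Crude condition summary; a real twin would feed the history
            # into a physics-based model to predict remaining useful life
            return max((r.severity for r in self.history), default=0.0)

    twin = DigitalTwin('wing-skin-001', design_model={})
    twin.add_inspection(InspectionRecord('2016-07-01', 'ultrasound C-scan', 0.3))
    print(twin.worst_known_damage())  # 0.3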

Sources:

De Lange, C., 2014, Meet your unborn child – before it’s conceived, New Scientist, 12 April 2014, p.8.

Glaessgen, E.H., & Stargel, D.S., 2012, The digital twin paradigm for future NASA and US Air Force vehicles, Proc 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, AIAA paper 2012-1818, NF1676L-13293.

Patterson E.A., Feligiotti, M. & Hack, E., 2013, On the integration of validation, quality assurance and non-destructive evaluation, J. Strain Analysis, 48(1):48-59.

Patterson, E.A., Taylor, R.J. & Bankhead, M., 2016, A framework for an integrated nuclear digital environment, Progress in Nuclear Energy, 87:97-103.

Patterson EA & Whelan MP, 2016, A framework to establish credibility of computational models in biology, Progress in Biophysics & Molecular Biology, doi: 10.1016/j.pbiomolbio.2016.08.007.

Tuegel, E.J., 2012, The airframe digital twin: some challenges to realization, Proc 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference.

Coping with uncertainty

The first death of a driver in a car while using Autopilot has been widely reported with much hyperbole, though with a few notable exceptions, for instance Nick Bilton in Vanity Fair on July 7th, 2016, who pointed out that you were statistically safer in a Tesla with its Autopilot functioning than driving normally.  This is based on the fact that worldwide there is a fatality for every 60 million miles driven, or every 94 million miles in the US, whereas Joshua Brown’s tragic death was the first in 130 million miles driven by Teslas with Autopilot activated.  This implies that, globally, the fatality rate per mile in a Tesla driving autonomously is roughly half that of a manually driven car.
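
The arithmetic behind that comparison is simple enough to check for yourself, using the miles-per-fatality figures quoted above:

    # Miles driven per fatality, as quoted in the post
    worldwide = 60e6          # all driving, worldwide
    usa = 94e6                # all driving, US
    tesla_autopilot = 130e6   # miles before the first Autopilot fatality

    # Fatalities per mile: the Autopilot rate is roughly half the global rate
    print((1 / tesla_autopilot) / (1 / worldwide))  # ~0.46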

If you decide to go by plane instead, then the probability of arriving safely is extremely good, because only one in every 3 million flights last year resulted in fatalities; or, put another way, 3.3 billion passengers were transported with the loss of 641 lives, which is about one in 5 million.  People worry about these probabilities while at the same time buying lottery tickets with a much lower probability of winning the jackpot, which is about 1 in 14 million in the UK.  In all of these cases, the probability is saying something about the frequency of occurrence of these events.  We don’t know whether the plane will crash on the next flight we take, so we rationalise this uncertainty by defining the frequency of flights that end in a fatal crash.  The French mathematician Pierre-Simon Laplace (1749-1827) thought about probability as a measure of our ignorance or uncertainty.  As we have come to realise the extent of our uncertainty about many things in science (see my post ‘Electron Uncertainty’ on July 27th, 2016) and life (see my post ‘Unexpected bad news for turkeys’ on November 25th, 2015), the concept of probability has become ever more important.  Caputo has argued that ‘a post-modern style demands a capacity to sustain uncertainty and instability, to live with the unforeseen and unpredictable as positive conditions of the possibility of an open-ended future’.  Most of us can manage this concept when the open-ended future is a lottery jackpot, but struggle with the remaining uncertainties of life, particularly when presented with new ones, such as autonomous cars.
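
Again, the probabilities are easy to reproduce from the quoted figures:

    passengers, deaths = 3.3e9, 641
    print(passengers / deaths)   # ~5.1 million passengers per fatality

    # Compare with the UK lottery jackpot, about 1 chance in 14 million:
    # winning is only about a third as likely as dying on a flight
    print((1 / 14e6) / (deaths / passengers))  # ~0.37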

Sources:

Bilton, N., How the media screwed up the fatal Tesla accident, Vanity Fair, July 7th, 2016

IATA Safety Report 2014

Caputo JD, Truth: Philosophy in Transit, London: Penguin 2013.

Ball, J., How safe is air travel really? The Guardian, July 24th, 2014

Boagey, R., Who’s behind the wheel? Professional Engineering, 29(8):22-26, August 2016.