Tag Archives: uncertainty

Tyranny of quantification

There is a growing feeling that our use of metrics is doing more harm than good.  My title today is a misquote of Rebecca Solnit; she actually said ‘tyranny of the quantifiable’.  Or perhaps it is a combination of her phrase and the title of a new book by Jerry Muller, ‘The Tyranny of Metrics’, which was reviewed in the FT Weekend of 27/28 January 2018 by Tim Harford, who recently published a book called ‘Messy’ dealing with similar issues, amongst other things.

I wrote ‘growing feeling’ and then almost fell into the trap of attempting to quantify the feeling by providing you with some evidence; but I stopped short of trying to assign any numbers to the feeling and its growth – that would have been illogical, since the definition of a feeling is ‘an emotional state or reaction, an idea or belief, especially a vague or irrational one’.

Harford puts it slightly differently: that ‘many of us have a vague sense that metrics are leading us astray, stripping away context, devaluing subtle human judgment’.  Advances in sensors and the ubiquity of computing power allow vast amounts of data to be acquired and processed into metrics that can be ranked and used to make and justify decisions.  Data, and consequently empiricism, is king; rationalism has been cast out into the wilderness.  Like Muller, I am not suggesting that metrics are useless, but that they are only one tool in decision-making and that they need to be used by those with relevant expertise and experience in order to avoid unexpected consequences.

To quote Muller: ‘measurement is not an alternative to judgement: measurement demands judgement – judgement about whether to measure, what to measure, how to evaluate the significance of what’s been measured, whether rewards and penalties will be attached to the results, and to whom to make the measurements available’.

Sources:

Lunch with the FT – Rebecca Solnit by Rana Foroohar in FT Weekend 10/11 February 2018

Desperate measures by Tim Harford in FT Weekend 27/28 January 2018

Muller JZ, The Tyranny of Metrics, Princeton NJ: Princeton University Press, 2018.

Image: http://maxpixel.freegreatpicture.com/Measurement-Stopwatch-Timer-Clock-Symbol-Icon-2624277

Coping with uncertainty

The first death of a driver in a car while using Autopilot has been widely reported with much hyperbole, though with a few notable exceptions, for instance Nick Bilton in Vanity Fair on July 7th, 2016, who pointed out that statistically you were safer in a Tesla with its Autopilot functioning than driving normally.  This is based on the fact that worldwide there is a fatality for every 60 million miles driven, or every 94 million miles in the US, whereas Joshua Brown’s tragic death was the first in 130 million miles driven by Teslas with Autopilot activated.  This implies that, globally, your next car journey is roughly half as likely to end in a fatality in a Tesla with Autopilot engaged as in a manually driven car.
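As a rough check on that arithmetic, here is a minimal sketch in Python of the fatality rates per mile implied by the figures quoted above:

```python
# Rough comparison of fatality rates per mile, using the figures quoted above.
miles_per_fatality = {
    "worldwide, all driving": 60e6,
    "US, all driving": 94e6,
    "Tesla with Autopilot engaged": 130e6,
}

for label, miles in miles_per_fatality.items():
    print(f"{label}: {1 / miles:.2e} fatalities per mile")

# Ratio of the worldwide rate to the Autopilot rate is ~2.2, i.e. the per-mile
# fatality rate with Autopilot engaged is roughly half the global average.
print(f"ratio: {130e6 / 60e6:.1f}")
```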

If you decide to go by plane instead, then the probability of arriving safely is extremely good, because only one in every 3 million flights last year resulted in fatalities; or, put another way, 3.3 billion passengers were transported with the loss of 641 lives, which is about one fatality per 5 million passengers.  People worry about these probabilities while at the same time buying lottery tickets with a much lower probability of winning the jackpot, which is about 1 in 14 million in the UK.  In all of these cases, the probability is saying something about the frequency of occurrence of these events.  We don’t know whether the plane will crash on the next flight we take, so we rationalise this uncertainty by defining the frequency of flights that end in a fatal crash.

The French mathematician, Pierre-Simon Laplace (1749-1827), thought about probability as a measure of our ignorance or uncertainty.  The more we have come to realise the extent of our uncertainty about many things in science (see my post: ‘Electron Uncertainty‘ on July 27th, 2016) and life (see my post: ‘Unexpected bad news for turkeys‘ on November 25th, 2015), the more important the concept of probability has become.  Caputo has argued that ‘a post-modern style demands a capacity to sustain uncertainty and instability, to live with the unforeseen and unpredictable as positive conditions of the possibility of an open-ended future’.  Most of us can manage this concept when the open-ended future is a lottery jackpot but struggle with the remaining uncertainties of life, particularly when presented with new ones, such as autonomous cars.
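A similar sketch puts the flight and lottery figures quoted above on a common footing as probabilities:

```python
# Probabilities implied by the figures quoted above.
p_fatal_flight = 1 / 3e6            # flights last year that ended in fatalities
p_fatal_passenger = 641 / 3.3e9     # lives lost per passenger carried
p_jackpot = 1 / 14e6                # UK lottery jackpot, single ticket (approx.)

for label, p in [
    ("fatal outcome per flight", p_fatal_flight),
    ("fatal outcome per passenger", p_fatal_passenger),
    ("UK lottery jackpot per ticket", p_jackpot),
]:
    print(f"{label}: 1 in {1 / p:,.0f}")
```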

Sources:

Bilton, N., How the media screwed up the fatal Tesla accident, Vanity Fair, July 7th, 2016

IATA Safety Report 2014

Caputo JD, Truth: Philosophy in Transit, London: Penguin 2013.

Ball, J., How safe is air travel really? The Guardian, July 24th, 2014

Boagey, R., Who’s behind the wheel? Professional Engineering, 29(8):22-26, August 2016.

Innovation out of chaos

‘We are managing in chaos…our competition never knows what we are going to come up with next.  The fact is neither do we.’  This is a quote from the 1996 UK Innovation Lecture given by William Coyne, who was VP for Research at 3M at the time.  I used it a couple of months ago at a technical conference, where I was invited to be a panel member for a discussion on innovation.  This state of chaos from which innovation arises is characteristic of ‘organic’ organizations that lack formal job definitions, encourage lateral interactions and give greater responsibility to individuals.  Conversely, innovation is stifled in ‘mechanistic’ organizations that are characterized by specialisms, powerful functional roles, vertical management interactions, a command hierarchy and a complex organizational chart.

So, I suggested that innovation can be stimulated by removing or loosening organizational and intellectual constraints.  The latter means allowing people to think differently and not hiring people who look or think like you.  Of course, this is not easy – it requires a subtle balance of sustainable orderliness!  However, as a member of the audience remarked, ‘innovative organizations have fun!’.  And maybe this gets to the heart of the issue: too much order leads to boring predictability, too much disorder is scary, but the right level of disorder, or entropy, is exciting and stimulates creativity.

Source:

Handscombe RD & Patterson EA, The Entropy Vector: Connecting Science and Business, Singapore: World Scientific Publishing, 2004.


Credibility is in the eye of the beholder

Last month I described how computational models were used as more than fables in many areas of applied science, including engineering and precision medicine [‘Models as fables’ on March 16th, 2016].  When people need to make decisions with socioeconomic and/or personal costs based on the predictions from these models, then the models need to be credible.  Credibility is like beauty: it is in the eye of the beholder.  It is a challenging problem to convince decision-makers, who are often not expert in the technology or modelling techniques, that the predictions are reliable and accurate.  After all, a model that is reliable and accurate but in which decision-makers have no confidence is almost useless.

In my research we are interested in the credibility of computational mechanics models that are used to optimise the design of load-bearing structures, whether it is the frame of a building, the wing of an aircraft or a hip prosthesis.  We have techniques that allow us to characterise maps of strain using feature vectors [see my post entitled ‘Recognising strain‘ on October 28th, 2015] and then to compare the ‘distances’ between the vectors representing the predictions and measurements.  If the predicted map of strain is a perfect representation of the map measured in a physical prototype, then this ‘distance’ will be zero.  Of course, this never happens, because there is noise in the measured data and our models are never perfect; they contain simplifying assumptions that make the modelling viable.  The difficult question is how much difference is acceptable between the predictions and the measurements.  The public expect certainty with respect to the performance of an engineering structure, whereas engineers know that there is always some uncertainty – we can reduce it, but that costs money: money for more sophisticated models, for more computational resources to execute the models, and for more and better quality measurements.
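By way of illustration only (the feature-vector decomposition we actually use for strain maps is more involved), the comparison step can be sketched in Python as a distance between two vectors; the vectors and the simple Euclidean metric below are hypothetical stand-ins:

```python
import numpy as np

def feature_distance(predicted: np.ndarray, measured: np.ndarray) -> float:
    """Euclidean distance between two feature vectors of equal length."""
    return float(np.linalg.norm(predicted - measured))

# Hypothetical feature vectors extracted from a predicted and a measured strain map.
predicted = np.array([0.52, -0.11, 0.03, 0.27])
measured = np.array([0.50, -0.09, 0.05, 0.30])  # measurement noise means an exact match never occurs

print(feature_distance(predicted, measured))   # small but non-zero in practice
print(feature_distance(predicted, predicted))  # zero only for a perfect representation
```

The difficult question of how much difference is acceptable then becomes a question of how large this distance can be before the decision-maker should lose confidence in the model.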