Tag Archives: uncertainty

Tyranny of quantification

There is a growing feeling that our use of metrics is doing more harm than good. My title today is a misquote of Rebecca Solnit; she actually said ‘tyranny of the quantifiable’, or perhaps it is a combination of her quote and the title of a new book by Jerry Muller, ‘The Tyranny of Metrics’, which was reviewed in the FT Weekend of 27/28 January 2018 by Tim Harford, who recently published a book called Messy that deals with similar issues, amongst other things.

I wrote ‘growing feeling’ and then almost fell into the trap of attempting to quantify that feeling by providing you with some evidence; but I stopped short of assigning numbers to the feeling and its growth – that would have been illogical, since a feeling is defined as ‘an emotional state or reaction, an idea or belief, especially a vague or irrational one’.

Harford puts it slightly differently: ‘many of us have a vague sense that metrics are leading us astray, stripping away context, devaluing subtle human judgment’.  Advances in sensors and the ubiquity of computing power allow vast amounts of data to be acquired and processed into metrics that can be ranked and used to make and justify decisions.  Data, and consequently empiricism, is king; rationalism has been cast out into the wilderness.  Like Muller, I am not suggesting that metrics are useless, but that they are only one tool in decision-making and that they need to be used by those with relevant expertise and experience in order to avoid unexpected consequences.

To quote Muller: ‘measurement is not an alternative to judgement: measurement demands judgement – judgement about whether to measure, what to measure, how to evaluate the significance of what’s been measured, whether rewards and penalties will be attached to the results, and to whom to make the measurements available‘.

Sources:

Lunch with the FT – Rebecca Solnit by Rana Foroohar in FT Weekend 10/11 February 2018

Desperate measures by Tim Harford in FT Weekend 27/28 February 2018

Muller JZ, The Tyranny of Metrics, Princeton NJ: Princeton University Press, 2018.

Image: http://maxpixel.freegreatpicture.com/Measurement-Stopwatch-Timer-Clock-Symbol-Icon-2624277

How many repeats do we need?

This is a question that both my undergraduate students and a group of taught post-graduates have struggled with this month.  In thermodynamics, my undergraduate students were estimating absolute zero in degrees Celsius using a simple manometer and a digital thermometer (this is an experiment from my MOOC: Energy – Thermodynamics in Everyday Life).  They needed to know how many times to repeat the experiment in order to determine whether their result was significantly different to the theoretical value: -273 degrees Celsius [see my post entitled ‘Arbitrary zero’ on February 13th, 2013 and ‘Beyond zero’ the following week]. Meanwhile, the post-graduate students were measuring the strain distribution in a metal plate with a central hole that was loaded in tension. They needed to know how many times to repeat the experiment to obtain meaningful results that would allow a decision to be made about the validity of their computer simulation of the experiment [see my post entitled ‘Getting smarter’ on June 21st, 2017].

The simple answer is that six repeats are needed if you want 98% confidence in the conclusion and you are happy to accept that the margin of error and the standard deviation of your sample are equal.  The latter implies that error bars of the mean plus and minus one standard deviation are also 98% confidence limits, which is often convenient.  Not surprisingly, only a few undergraduate students figured that out and repeated their experiment six times; the post-graduates pooled their data to give themselves a large enough sample size.

The justification for this answer lies in an equation that relates the sample size, n, to the margin of error, MOE, the standard deviation of the sample, σ, and the shape of the normal distribution described by the z-score or z-statistic, z*:

n ≥ (z*σ/MOE)²

The margin of error, MOE, is the maximum expected difference between the true value of a parameter and the sample estimate of that parameter, which is usually the mean of the sample; while the standard deviation, σ, describes the spread of the data values in the sample about the sample mean, μ.  If we don’t know one of these quantities then we can simplify the equation by assuming that MOE and σ are equal; and then n ≥ (z*)².
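As a quick sketch of this relationship, the function below computes the minimum number of repeats from the sample-size equation; the confidence level, σ and MOE values in the example are hypothetical, chosen only to illustrate how n grows when the target margin of error is tighter than the scatter in the data.

```python
from math import ceil
from statistics import NormalDist

def sample_size(confidence, sigma, moe):
    """Minimum repeats n >= (z* sigma / MOE)^2 for a two-sided
    confidence interval at the given confidence level."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided z*
    return ceil((z * sigma / moe) ** 2)

# When MOE = sigma, the formula reduces to n >= (z*)^2: six repeats at 98%
print(sample_size(0.98, sigma=1.0, moe=1.0))  # 6

# Hypothetical case: scatter of 4 units but a target margin of 2 units
print(sample_size(0.98, sigma=4.0, moe=2.0))  # many more repeats needed
```

Note that halving the margin of error relative to σ quadruples the required sample size, since n scales with the square of σ/MOE.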

The z-statistic is the number of standard deviations from the mean at which a data value lies, i.e., the distance from the mean in a Normal distribution, as shown in the graphic [for more on the Normal distribution, see my post entitled ‘Uncertainty about Bayesian methods’ on June 7th, 2017].  We can specify its value so that the interval defined by its positive and negative value contains 98% of the distribution.  The values of z* for 90%, 95%, 98% and 99% are shown in the table in the graphic with the corresponding values of (z*)², which are equivalent to the minimum values of the sample size, n (the number of repeats).
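The table in the graphic can be reproduced with a few lines of Python, using the standard library's Normal distribution to find the two-sided z* for each confidence level and rounding (z*)² up to the minimum whole number of repeats:

```python
from math import ceil
from statistics import NormalDist

# Two-sided z* for each confidence level, and the implied minimum
# sample size n >= (z*)^2 when we take MOE equal to sigma.
for conf in (0.90, 0.95, 0.98, 0.99):
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    print(f"{conf:.0%}: z* = {z:.3f}, (z*)^2 = {z * z:.3f}, n >= {ceil(z * z)}")
```

At 98% confidence, z* ≈ 2.326 and (z*)² ≈ 5.41, which rounds up to the six repeats quoted above.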

Confidence limits are defined as μ ± z*σ/√n; but when n = (z*)², this simplifies to μ ± σ.  So, with a sample size of six (n = 6 for 98% confidence) we can state with 98% confidence that there is no significant difference between our mean estimate and the theoretical value of absolute zero when that difference is less than the standard deviation of our six estimates.
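Putting the pieces together, the check the undergraduates had to perform can be sketched as follows; the six estimates of absolute zero below are hypothetical values invented for illustration, not real measurements from the manometer experiment.

```python
from statistics import mean, stdev

# Hypothetical set of six repeat estimates of absolute zero in deg C
estimates = [-270.1, -275.4, -268.9, -276.2, -271.8, -274.0]

m, s = mean(estimates), stdev(estimates)
theory = -273.0

# With n = 6 = (z*)^2 at 98% confidence, the limits mu +/- sigma are
# 98% confidence limits, so the difference from theory is significant
# only if it exceeds one standard deviation of the estimates.
significant = abs(m - theory) > s
print(f"mean = {m:.1f} deg C, std dev = {s:.1f}, significant: {significant}")
```

For this invented data set the sample mean sits well within one standard deviation of -273 degrees Celsius, so the difference would not be judged significant at the 98% level.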

BTW – the apparatus for the thermodynamics experiment costs less than £10.  The instruction sheet is available here – it is not quite an Everyday Engineering Example, but the experiment is designed to be performed in your kitchen rather than in a laboratory.