Category Archives: uncertainty

On the trustworthiness of multi-physics models

I stayed in Sheffield city centre a few weeks ago and walked past the standard measures in the photograph on my way to speak at a workshop. In the past, when the cutlery and tool-making industry in Sheffield was focussed around small workshops, or little mesters, as they were known, these standards would have been used to check the tools being manufactured. A few hundred years later, the range of standards in existence has extended far beyond the weights and measures where it started, and now includes standards for processes and artefacts as well as for measurements. The process of validating computational models of engineering infrastructure is moving slowly towards an internationally recognised standard [see two of my earliest posts: ‘Model validation‘ on September 18th, 2012 and ‘Setting standards‘ on January 29th, 2014]. We have guidelines that recommend approaches for different parts of the validation process [see ‘Setting standards‘ on January 29th, 2014]; however, many types of computational model present significant challenges when establishing their reliability [see ‘Spatial-temporal models of protein structures‘ on March 27th, 2019].

Under the auspices of the MOTIVATE project, we are gathering experts in Zurich on November 5th, 2019 to discuss the challenges of validating multi-physics models, establishing credibility and the future use of data from experiments. It is the fourth in a series of workshops; the previous three were held in Shanghai, London and Munich. For more information and to register, follow this link. Come and join our discussions in one of my favourite cities, where we will be following ‘In Einstein’s footprints‘ [posted on February 27th, 2019].

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660.

Epistemic triage

A couple of weeks ago I wrote about epistemic dependence and the idea that we need to trust experts because we are unable to verify everything ourselves: life is too short and there are too many things to think about. However, this approach exposes us to the risk of being misled. Julian Baggini has suggested that this risk is increasing with the growth of psychology, which has allowed more people to master methods of manipulating us, leading to ‘a kind of arms race of deception in which truth is the main casualty.’ He suggests that when we are presented with new information we should perform an epistemic triage by asking:

  • Is this a domain in which anyone can speak the truth?
  • What kind of expert is a trustworthy source of truth in that domain?
  • Is a particular expert to be trusted?

The deluge of information that streams in front of our eyes when we look at the screens of our phones, computers and televisions seems to leave most of us grasping for a hold on reality. Perhaps we should treat it all as fiction until we have performed Baggini’s triage, at least on the sources of the information streams, if not also on the individual items of information.

Source:

Julian Baggini, A short history of truth: consolations for a post-truth world, London: Quercus Editions Ltd, 2017.

Establishing fidelity and credibility in tests & simulations (FACTS)

A month or so ago I gave a lecture entitled ‘Establishing FACTS (Fidelity And Credibility in Tests & Simulations)’ to the local branch of the Institution of Engineering and Technology (IET). Of course, my title was a play on words because the Oxford English Dictionary defines a ‘fact’ as ‘a thing that is known or proved to be true’ or ‘information used as evidence or as part of a report’. One of my current research interests is how we establish predictions from simulations as evidence that can be used reliably in decision-making. This is important because simulations based on computational models have become ubiquitous in engineering for, amongst other things, design optimisation and evaluation of structural integrity. These models need to possess the appropriate level of fidelity and to be credible in the eyes of decision-makers, not just their creators. Model credibility is usually established through validation processes using a small number of physical tests that must yield a large quantity of reliable and relevant data [see ‘Getting smarter‘ on June 21st, 2017]. Reliable and relevant data means making measurements with low levels of uncertainty under real-world conditions, which is usually challenging.
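To make this concrete, here is a minimal sketch in Python of the kind of comparison on which validation rests: a model’s predictions are judged against measurements and their uncertainty. All of the values and uncertainties below are illustrative inventions, and the point-by-point check and mean relative error are only the simplest of many possible metrics; the guidelines mentioned above recommend more sophisticated approaches.

```python
# Minimal sketch of a simple validation check (illustrative values only):
# does the discrepancy between prediction and measurement fall within
# the expanded measurement uncertainty at each location?
import numpy as np

# Hypothetical measured strains (micro-strain) at five gauge locations,
# with their standard measurement uncertainties.
measured = np.array([512.0, 488.0, 530.0, 505.0, 497.0])
u_measured = np.array([8.0, 7.5, 9.0, 8.2, 7.8])

# Corresponding predictions from the computational model.
predicted = np.array([520.0, 480.0, 541.0, 512.0, 490.0])

# Expanded uncertainty with coverage factor k = 2 (roughly 95% confidence).
k = 2.0
expanded = k * u_measured

# Point-by-point check, plus a simple aggregate metric:
# the mean relative error across the gauge locations.
within = np.abs(predicted - measured) <= expanded
rel_error = np.mean(np.abs(predicted - measured) / np.abs(measured))

print(f"Locations within expanded uncertainty: {within.sum()} of {within.size}")
print(f"Mean relative error: {rel_error:.1%}")
```

Even in this toy form, the two ingredients of credibility are visible: the quality of the data (the size of the measurement uncertainties) and the closeness of the predictions to it; and a decision-maker must still judge whether ‘close enough’ really is.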

These topics recur through much of my research and have found applications in aerospace engineering, nuclear engineering and biology. My lecture to the IET gave an overview of these ideas using applications from each of these fields, some of which I have described in past posts.  So, I have now created a new page on this blog with a catalogue of these past posts on the theme of ‘FACTS‘.  Feel free to have a browse!

Tyranny of quantification

There is a growing feeling that our use of metrics is doing more harm than good. My title today is a mis-quote from Rebecca Solnit; she actually said ‘tyranny of the quantifiable‘. Or perhaps it is a combination of her quote and the title of a new book by Jerry Muller, ‘The Tyranny of Metrics‘, which was reviewed in the FT Weekend of 27/28 January 2018 by Tim Harford, who recently published a book called Messy that dealt with similar issues, amongst other things.

I wrote ‘growing feeling’ and then almost fell into the trap of attempting to quantify the feeling by providing you with some evidence; but, I stopped short of trying to assign any numbers to the feeling and its growth – that would have been illogical since the definition of a feeling is ‘an emotional state or reaction, an idea or belief, especially a vague or irrational one’.

Harford puts it slightly differently: ‘many of us have a vague sense that metrics are leading us astray, stripping away context, devaluing subtle human judgment‘. Advances in sensors and the ubiquity of computing power allow vast amounts of data to be acquired and processed into metrics that can be ranked and used to make and justify decisions. Data, and consequently empiricism, is king; rationalism has been cast out into the wilderness. Like Muller, I am not suggesting that metrics are useless, but that they are only one tool in decision-making and that they need to be used by those with relevant expertise and experience in order to avoid unexpected consequences.
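A classic statistical illustration of how metrics strip away context is Anscombe’s quartet: four small datasets with near-identical summary statistics but completely different structure. The sketch below, in Python and using just two of the four sets, shows how the headline numbers are blind to a difference that a plot, and a judging eye, would catch immediately.

```python
# Sketch: Anscombe's quartet as a warning about summary metrics.
# Two of Anscombe's four datasets share almost identical summary
# statistics yet look completely different when plotted.
import numpy as np

x = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y1 = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96,
               7.24, 4.26, 10.84, 4.82, 5.68])   # roughly linear with scatter
y2 = np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10,
               6.13, 3.10, 9.13, 7.26, 4.74])    # a smooth curve, not a line

for name, y in [("dataset I", y1), ("dataset II", y2)]:
    r = np.corrcoef(x, y)[0, 1]
    print(f"{name}: mean={y.mean():.2f}, variance={y.var(ddof=1):.2f}, "
          f"correlation={r:.3f}")
# Both print mean of about 7.50, variance of about 4.13 and correlation
# of about 0.816 -- the metrics cannot distinguish a noisy straight line
# from a curve.
```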

To quote Muller: ‘measurement is not an alternative to judgement: measurement demands judgement – judgement about whether to measure, what to measure, how to evaluate the significance of what’s been measured, whether rewards and penalties will be attached to the results, and to whom to make the measurements available‘.

Sources:

Lunch with the FT – Rebecca Solnit by Rana Foroohar in FT Weekend 10/11 February 2018

Desperate measures by Tim Harford in FT Weekend 27/28 January 2018

Muller JZ, The Tyranny of Metrics, Princeton NJ: Princeton University Press, 2018.

Image: http://maxpixel.freegreatpicture.com/Measurement-Stopwatch-Timer-Clock-Symbol-Icon-2624277