
On the trustworthiness of multi-physics models

I stayed in Sheffield city centre a few weeks ago and walked past the standard measures in the photograph on my way to speak at a workshop.  In the past, when the cutlery and tool-making industry in Sheffield was focussed on small workshops, or little mesters, as they were known, these standards would have been used to check the tools being manufactured.  A few hundred years later, the range of standards in existence has extended far beyond the weights and measures where it started, and now includes standards for processes and artefacts as well as for measurements.  The process of validating computational models of engineering infrastructure is moving slowly towards establishing an internationally recognised standard [see two of my earliest posts: ‘Model validation‘ on September 18th, 2012 and ‘Setting standards‘ on January 29th, 2014].  We have guidelines that recommend approaches for different parts of the validation process [see ‘Setting standards‘ on January 29th, 2014]; however, many types of computational model present significant challenges when establishing their reliability [see ‘Spatial-temporal models of protein structures‘ on March 27th, 2019].  Under the auspices of the MOTIVATE project, we are gathering experts in Zurich on November 5th, 2019 to discuss the challenges of validating multi-physics models, establishing credibility and the future use of data from experiments.  It is the fourth in a series of workshops, the previous ones having been held in Shanghai, London and Munich.  For more information and to register, follow this link.  Come and join our discussions in one of my favourite cities, where we will be following ‘In Einstein’s footprints‘ [posted on February 27th, 2019].

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660.

In Einstein’s footprints?

Grand Hall of the Guild of Carpenters, Zurich

During the past week, I have been working with members of my research group on a series of papers for a conference in the USA that a small group of us will be attending in the summer.  Dissemination is an important step in the research process; there is no point in doing the research if we lock the results away in a desk drawer and forget about them.  Nowadays, the funding organisations that support our research expect to see a dissemination plan as part of our proposals; hence, we have an obligation to present our results to the scientific community as well as to communicate them more widely, for instance through this blog.

That’s all fine; but nevertheless, I don’t find most conferences a worthwhile experience.  Often, there are too many uncoordinated sessions running in parallel that contain presentations describing tiny steps forward in knowledge and understanding which fail to compel your attention [see ‘Compelling presentations‘ on March 21st, 2018].  Of course, they can provide an opportunity to network, especially for those researchers in the early stages of their careers; but, in my experience, they are rarely the location for serious intellectual discussion or debate.  This is more likely to happen in small workshops focussed on a ‘hot-topic’ and with a carefully selected eclectic mix of speakers interspersed with chaired discussion sessions.

I have been involved in organising a number of such workshops in Glasgow, London, Munich and Shanghai over the last decade.  The next one will be in Zurich in November 2019 in the Guild Hall of Carpenters (Zunfthaus zur Zimmerleuten), where Einstein lectured in November 1910 to the Zurich Physical Society ‘On Boltzmann’s principle and some of its direct consequences‘.  Our subject will be different: ‘Validation of Computational Mechanics Models’; but we hope that the debate on credible models, multi-physics simulations and surviving with experimental data will be as lively as in 1910.  If you would like to contribute then download the pdf from this link; and if you would just like to attend the one-day workshop, we will be announcing registration soon and there is no charge!

We have published the outcomes from some of our previous workshops:

Advances in Validation of Computational Mechanics Models (from the 2014 workshop in Munich), Journal of Strain Analysis, vol. 51, no. 1, 2016.

Strain Measurement in Extreme Environments (from the 2012 workshop in Glasgow), Journal of Strain Analysis, vol. 49, no. 4, 2014.

Validation of Computational Solid Mechanics Models (from the 2011 workshop in Shanghai), Journal of Strain Analysis, vol. 48, no. 1, 2013.

The workshop is supported by the MOTIVATE project and further details are available at http://www.engineeringvalidation.org/4th-workshop

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660.

Million to one

‘All models are wrong, but some are useful’ is a quote, usually attributed to George Box, that is often cited in the context of computer models and simulations.  Working out which models are useful can be difficult and it is essential to get it right when a model is to be used to design an aircraft, support the safety case for a nuclear power station or inform regulatory risk assessment on a new chemical.  One way to identify a useful model is to assess its predictions against measurements made in the real world [see ‘Model validation’ on September 18th, 2012].  Many people have worked on validation metrics that allow predicted and measured signals to be compared, and some result in a statement of the probability that the predicted and measured signals belong to the same population.  This works well if the predictions and measurements are, for example, the temperature measured at a single weather station over a period of time; however, these validation metrics cannot handle fields of data, for instance a map of temperature, measured with an infrared camera, in a power station during start-up.  We have been working on resolving this issue and we have recently published a paper on ‘A probabilistic metric for the validation of computational models’.  We reduce the dimensionality of a field of data, represented by values in a matrix, to a vector using orthogonal decomposition [see ‘Recognizing strain’ on October 28th, 2015].  The data field could be a map of temperature, the strain field in an aircraft wing or the topography of a landscape – it does not matter.  The decomposition is performed separately and identically on the predicted and measured data fields to create two vectors – one for the predictions and one for the measurements.  We look at the differences between these two vectors and compare them against the uncertainty in the measurements to arrive at a probability that the predictions belong to the same population as the measurements.  There are subtleties in the process that I have omitted, but essentially we can take two data fields composed of millions of values and arrive at a single number to describe the usefulness of the model’s predictions.
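
For readers who like to see the idea written down, here is a minimal sketch of that workflow in Python.  It is not the implementation from our paper: the decomposition (a plain singular value decomposition), the synthetic temperature fields, the uncertainty value and the simple acceptance rule are all illustrative assumptions, chosen only to show how two large data fields can be reduced to coefficient vectors and compared against a measurement uncertainty.

```python
# Illustrative sketch only - not the procedure published in the paper.
# It shows the general idea: reduce each data field to a vector of
# coefficients using an orthogonal decomposition, then compare the
# coefficient differences with the measurement uncertainty.

import numpy as np


def field_to_vector(field, n_terms=20):
    """Reduce a 2-D data field to a short vector of coefficients.

    Singular values are used here purely for illustration; any orthogonal
    decomposition could be substituted, provided it is applied identically
    to both the predicted and the measured fields.
    """
    singular_values = np.linalg.svd(field, compute_uv=False)
    return singular_values[:n_terms]


def validation_score(predicted, measured, measurement_uncertainty):
    """Fraction of coefficients whose predicted-measured difference lies
    within the measurement uncertainty (a simple stand-in for the
    probabilistic metric described in the post)."""
    p = field_to_vector(predicted)
    m = field_to_vector(measured)
    return float(np.mean(np.abs(p - m) <= measurement_uncertainty))


if __name__ == "__main__":
    # Hypothetical 'infrared' temperature map and a model prediction with a
    # small spatially varying bias - synthetic data for the demo only.
    x = np.linspace(-1.0, 1.0, 500)
    X, Y = np.meshgrid(x, x)
    measured = 300.0 + 50.0 * np.exp(-(X**2 + Y**2))   # kelvin
    predicted = measured + 0.5 * X                      # model output with a bias

    u = 1.0  # assumed measurement uncertainty in kelvin
    print(f"Validation score: {validation_score(predicted, measured, u):.2f}")
```

In practice the choice of decomposition, the number of terms retained and the way the measurement uncertainty enters the comparison all matter; the paper cited below sets these out properly.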

Our paper was published by the Royal Society with a press release, but in the same week as the proposed Brexit agreement; so I would like to think that it was ignored because of the overwhelming interest in the political storm around Brexit rather than because of its esoteric nature.

Source:

Dvurecenska K, Graham S, Patelli E & Patterson EA, A probabilistic metric for the validation of computational models, Royal Society Open Science, 5:180687, 2018.

Establishing fidelity and credibility in tests & simulations (FACTS)

A month or so ago, I gave a lecture entitled ‘Establishing FACTS (Fidelity And Credibility in Tests & Simulations)’ to the local branch of the Institution of Engineering and Technology (IET).  Of course, my title was a play on words because the Oxford English Dictionary defines a ‘fact’ as ‘a thing that is known or proved to be true’ or ‘information used as evidence or as part of a report’.  One of my current research interests is how we establish predictions from simulations as evidence that can be used reliably in decision-making.  This is important because simulations based on computational models have become ubiquitous in engineering for, amongst other things, design optimisation and evaluation of structural integrity.  These models need to possess the appropriate level of fidelity and to be credible in the eyes of decision-makers, not just their creators.  Model credibility is usually provided through validation processes using a small number of physical tests that must yield a large quantity of reliable and relevant data [see ‘Getting smarter‘ on June 21st, 2017].  Reliable and relevant data means making measurements with low levels of uncertainty under real-world conditions, which is usually challenging.

These topics recur through much of my research and have found applications in aerospace engineering, nuclear engineering and biology. My lecture to the IET gave an overview of these ideas using applications from each of these fields, some of which I have described in past posts.  So, I have now created a new page on this blog with a catalogue of these past posts on the theme of ‘FACTS‘.  Feel free to have a browse!