I stayed in Sheffield city centre a few weeks ago and walked past the standard measures in the photograph on my way to speak at a workshop. In the past, when the cutlery and tool-making industry in Sheffield was focussed around small workshops, or little mesters, as they were known, these standards would have been used to check the tools being manufactured. A few hundred years later, the range of standards in existence has extended far beyond the weights and measures where it started, and now includes standards for processes and artefacts as well as for measurements. The process of validating computational models of engineering infrastructure is moving slowly towards establishing an internationally recognised standard [see two of my earliest posts: ‘Model validation‘ on September 18th, 2012 and ‘Setting standards‘ on January 29th, 2014]. We have guidelines that recommend approaches for different parts of the validation process [see ‘Setting standards‘ on January 29th, 2014]; however, many types of computational model present significant challenges when establishing their reliability [see ‘Spatial-temporal models of protein structures‘ on March 27th, 2019]. Under the auspices of the MOTIVATE project, we are gathering experts in Zurich on November 5th, 2019 to discuss the challenges of validating multi-physics models, establishing credibility and the future use of data from experiments. It is the fourth in a series of workshops held previously in Shanghai, London and Munich. For more information and to register follow this link. Come and join our discussions in one of my favourite cities where we will be following ‘In Einstein’s footprints‘ [posted on February 27th, 2019].
For a number of years I have been working on methods for validating computational models of structures [see ‘Model validation‘ on September 18th, 2012] using the full potential of measurements made with modern techniques such as digital image correlation [see ‘256 shades of grey‘ on January 22nd, 2014] and thermoelastic stress analysis [see ‘Counting photons to measure stress‘ on November 18th, 2015]. Usually the focus of our interest is at the macroscale, for example the research on aircraft structures in the MOTIVATE project; however, in a new PhD project with colleagues at the National Tsing Hua University in Taiwan, we are planning to explore using our validation procedures and metrics in structural biology.
The size and timescale of protein-structure thermal fluctuations are essential to the regulation of cellular functions. Measurement techniques such as x-ray crystallography and transmission electron cryomicroscopy (Cryo-EM) provide data on electron density distribution from which protein structures can be deduced using molecular dynamics models. Our aim is to develop our validation metrics to help identify, with a defined level of confidence, the most appropriate structural ensemble for a given set of electron densities. To make the problem more interesting and challenging, the structure observed by x-ray crystallography is an average or equilibrium state, because a folded protein is constantly in motion, undergoing harmonic oscillations, each with different frequencies and amplitudes.
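The core of this comparison can be sketched in a few lines. The snippet below is a deliberately simplified illustration, not our published metric: it averages the density maps of each candidate ensemble, scores each average against the measured map with a root-mean-square misfit, and picks the best candidate. The ensemble names, the 1-D grids and the numbers are all made up for illustration; real electron-density maps are three-dimensional and need careful scaling and uncertainty treatment.

```python
import math

def rms_misfit(simulated, measured):
    """Root-mean-square misfit between a simulated density map and the
    measured map, both flattened to 1-D lists (illustrative only)."""
    assert len(simulated) == len(measured)
    return math.sqrt(sum((s - m) ** 2 for s, m in zip(simulated, measured))
                     / len(simulated))

def best_ensemble(ensembles, measured):
    """Return (name, misfit) for the candidate ensemble whose mean
    density map best matches the measurement."""
    def mean_map(maps):
        # Average the member maps point-by-point across the ensemble.
        n = len(maps)
        return [sum(col) / n for col in zip(*maps)]
    scores = {name: rms_misfit(mean_map(maps), measured)
              for name, maps in ensembles.items()}
    return min(scores.items(), key=lambda kv: kv[1])

# Toy example: two hypothetical conformational ensembles, each a set of
# density maps sampled on a 4-point grid.
measured = [0.2, 0.8, 0.8, 0.2]
ensembles = {
    "open":   [[0.1, 0.9, 0.9, 0.1], [0.3, 0.7, 0.7, 0.3]],
    "closed": [[0.6, 0.4, 0.4, 0.6], [0.8, 0.2, 0.2, 0.8]],
}
name, misfit = best_ensemble(ensembles, measured)
print(name, misfit)  # open 0.0: its mean map reproduces the measurement
```

A real metric would also attach a confidence level to the choice, for instance by weighing the misfit against the measurement uncertainty, which is the direction the PhD project is intended to take.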
The PhD project is part of the dual PhD programme of the University of Liverpool and National Tsing Hua University. Funding is available in the form of a fee waiver and a contribution to living expenses for four years of study, involving significant periods (preferably two years) at each university. For more information follow this link.
Dvurecenska, K., Graham, S., Patelli, E. & Patterson, E.A., A probabilistic metric for the validation of computational models, Royal Society Open Science, 5:180687, 2018.
Chan, J., Lin, H-R., Takemura, K., Chang, K-C., Chang, Y-Y., Joti, Y., Kitao, A. & Yang, L-W., An efficient timer and sizer of protein motions reveals the time-scales of functional dynamics in the ribosome, bioRxiv, 2018, https://www.biorxiv.org/content/early/2018/08/03/384511.
Image: A diffraction pattern and protein structure from http://xray.bmc.uu.se/xtal/
During the past week, I have been working with members of my research group on a series of papers for a conference in the USA that a small group of us will be attending in the summer. Dissemination is an important step in the research process; there is no point in doing the research if we lock the results away in a desk drawer and forget about them. Nowadays, the funding organisations that support our research expect to see a plan of dissemination as part of our proposals for research; and hence, we have an obligation to present our results to the scientific community as well as to communicate them more widely, for instance through this blog.
That’s all fine; but nevertheless, I don’t find most conferences a worthwhile experience. Often, there are too many uncoordinated sessions running in parallel, containing presentations that describe tiny steps forward in knowledge and understanding and fail to compel your attention [see ‘Compelling presentations‘ on March 21st, 2018]. Of course, conferences can provide an opportunity to network, especially for researchers in the early stages of their careers; but, in my experience, they are rarely the venue for serious intellectual discussion or debate. That is more likely to happen in small workshops focussed on a ‘hot topic’, with a carefully selected, eclectic mix of speakers interspersed with chaired discussion sessions.
I have been involved in organising a number of such workshops in Glasgow, London, Munich and Shanghai over the last decade. The next one will be in Zurich in November 2019, in the Guild Hall of Carpenters (Zunfthaus zur Zimmerleuten) where Einstein lectured in November 1910 to the Zurich Physical Society ‘On Boltzmann’s principle and some of its direct consequences‘. Our subject will be different: ‘Validation of Computational Mechanics Models’; but we hope that the debate on credible models, multi-physics simulations and the future use of experimental data will be as lively as in 1910. If you would like to contribute, then download the pdf from this link; and if you would just like to attend the one-day workshop, then we will be announcing registration soon; there is no charge!
We have published the outcomes from some of our previous workshops:
Advances in Validation of Computational Mechanics Models (from the 2014 workshop in Munich), Journal of Strain Analysis, vol. 51, no. 1, 2016.
Strain Measurement in Extreme Environments (from the 2012 workshop in Glasgow), Journal of Strain Analysis, vol. 49, no. 4, 2014.
Validation of Computational Solid Mechanics Models (from the 2011 workshop in Shanghai), Journal of Strain Analysis, vol. 48, no. 1, 2013.
The workshop is supported by the MOTIVATE project and further details are available at http://www.engineeringvalidation.org/4th-workshop
In his book, ‘Economic Rules – Why economics works, when it fails and how to tell the difference‘, Dani Rodrik describes models as fables – short stories that revolve around a few principal characters who live in an unnamed generic place and whose behaviour and interaction produce an outcome that serves as a lesson of sorts. This seems to me to be a healthy perspective compared to the almost slavish belief in computational models that is common today in many quarters. However, in engineering and increasingly in precision medicine, we use computational models as reliable and detailed predictors of the performance of specific systems. Quantifying this reliability in a way that is useful to non-expert decision-makers is a current area of my research. This work originated in aerospace engineering where it is possible, though expensive, to acquire comprehensive and information-rich data from experiments and then to validate models by comparing their predictions to measurements. We have progressed to nuclear power engineering in which the extreme conditions and time-scales lead to sparse or incomplete data that make it more challenging to assess the reliability of computational models. Now, we are just starting to consider models in computational biology where the inherent variability of biological data and our inability to control the real world present even bigger challenges to establishing model reliability.
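The idea of quantifying reliability for non-expert decision-makers can be made concrete with a small sketch. The function below is a simplified stand-in for a validation metric, not the published one from our research: it reports the fraction of measurement points at which the model's prediction falls within twice the stated measurement uncertainty, roughly a 95% interval if the noise is Gaussian. All of the numbers are invented for illustration.

```python
def validation_metric(predictions, measurements, uncertainties):
    """Fraction of points where the prediction agrees with the measurement
    to within twice the measurement uncertainty (illustrative only)."""
    assert len(predictions) == len(measurements) == len(uncertainties)
    hits = sum(
        1 for p, m, u in zip(predictions, measurements, uncertainties)
        if abs(p - m) <= 2.0 * u
    )
    return hits / len(predictions)

# Toy example: predicted vs. measured strains (microstrain) at four points,
# each measurement carrying an uncertainty of 5 microstrain.
pred = [100.0, 150.0, 210.0, 260.0]
meas = [102.0, 148.0, 230.0, 258.0]
unc = [5.0, 5.0, 5.0, 5.0]
print(validation_metric(pred, meas, unc))  # 0.75: three of four points agree
```

A single number like this is easy to communicate to a decision-maker, which is precisely why sparse nuclear data and highly variable biological data are so challenging: when the measurements themselves are scarce or scattered, the confidence one can attach to such a score drops sharply.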