
Spatial-temporal models of protein structures

For a number of years I have been working on methods for validating computational models of structures [see ‘Model validation’ on September 18th 2012] using the full potential of measurements made with modern techniques such as digital image correlation [see ‘256 shades of grey’ on January 22nd 2014] and thermoelastic stress analysis [see ‘Counting photons to measure stress’ on November 18th 2015].  Usually the focus of our interest is at the macroscale, for example the research on aircraft structures in the MOTIVATE project; however, in a new PhD project with colleagues at the National Tsing Hua University in Taiwan, we are planning to explore using our validation procedures and metrics [1] in structural biology.

The size and timescale of protein-structure thermal fluctuations are essential to the regulation of cellular functions. Measurement techniques such as x-ray crystallography and transmission electron cryomicroscopy (Cryo-EM) provide data on the electron density distribution from which protein structures can be deduced using molecular dynamics models. Our aim is to develop our validation metrics to help identify, with a defined level of confidence, the most appropriate structural ensemble for a given set of electron densities. To make the problem more interesting and challenging, the structure observed by x-ray crystallography is an average or equilibrium state because a folded protein is constantly in motion, undergoing harmonic oscillations, each with different frequencies and amplitudes [2].
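The idea of selecting the most appropriate ensemble for a given measured density can be illustrated with a toy calculation: score each candidate ensemble's simulated electron-density map against the measured map and pick the best. The Gaussian scoring rule and the function names below are my own illustrative assumptions, not the metric of reference [1]:

```python
import numpy as np

def ensemble_log_likelihood(simulated, measured, sigma):
    """Gaussian log-likelihood (up to a constant) of the measured
    electron-density map given a simulated map and a noise level sigma.
    A toy scoring rule, not the validation metric of reference [1]."""
    resid = simulated - measured
    return -0.5 * np.sum(resid ** 2) / sigma ** 2 - resid.size * np.log(sigma)

def best_ensemble(candidate_maps, measured, sigma=0.1):
    """Index of the candidate density map that best explains the
    measurement under the toy likelihood above."""
    scores = [ensemble_log_likelihood(c, measured, sigma) for c in candidate_maps]
    return int(np.argmax(scores))
```

A genuine metric would also report a level of confidence in the selection rather than only a ranking, which is where the probabilistic approach of [1] comes in.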

The PhD project is part of the dual PhD programme of the University of Liverpool and National Tsing Hua University.  Funding is available in the form of a fee waiver and a contribution to living expenses for four years of study, involving significant periods (preferably two years) at each university.  For more information follow this link.


[1] Dvurecenska, K., Graham, S., Patelli, E. & Patterson, E.A., A probabilistic metric for the validation of computational models, Royal Society Open Science, 5:180687, 2018.

[2] Chan, J., Lin, H-R., Takemura, K., Chang, K-C., Chang, Y-Y., Joti, Y., Kitao, A. & Yang, L-W., An efficient timer and sizer of protein motions reveals the time-scales of functional dynamics in the ribosome, 2018, https://www.biorxiv.org/content/early/2018/08/03/384511.

Image: A diffraction pattern and protein structure from http://xray.bmc.uu.se/xtal/

In Einstein’s footprints?

Grand Hall of the Guild of Carpenters, Zurich

During the past week, I have been working with members of my research group on a series of papers for a conference in the USA that a small group of us will be attending in the summer.  Dissemination is an important step in the research process; there is no point in doing the research if we lock the results away in a desk drawer and forget about them.  Nowadays, the funding organisations that support our research expect to see a plan of dissemination as part of our proposals for research; and hence, we have an obligation to present our results to the scientific community as well as to communicate them more widely, for instance through this blog.

That’s all fine; but nevertheless, I don’t find most conferences a worthwhile experience.  Often, there are too many uncoordinated sessions running in parallel that contain presentations describing tiny steps forward in knowledge and understanding which fail to compel your attention [see ‘Compelling presentations’ on March 21st, 2018].  Of course, they can provide an opportunity to network, especially for those researchers in the early stages of their careers; but, in my experience, they are rarely the location for serious intellectual discussion or debate.  This is more likely to happen in small workshops focussed on a ‘hot-topic’ and with a carefully selected eclectic mix of speakers interspersed with chaired discussion sessions.

I have been involved in organising a number of such workshops in Glasgow, London, Munich and Shanghai over the last decade.  The next one will be in Zurich in November 2019 in the Guild Hall of Carpenters (Zunfthaus zur Zimmerleuten), where Einstein lectured in November 1910 to the Zurich Physical Society ‘On Boltzmann’s principle and some of its direct consequences’.  Our subject will be different: ‘Validation of Computational Mechanics Models’; but we hope that the debate on credible models, multi-physics simulations and surviving with experimental data will be as lively as in 1910.  If you would like to contribute then download the pdf from this link; and if you would just like to attend the one-day workshop then we will be announcing registration soon and there is no charge!

We have published the outcomes from some of our previous workshops:

Advances in Validation of Computational Mechanics Models (from the 2014 workshop in Munich), Journal of Strain Analysis, vol. 51, no. 1, 2016.

Strain Measurement in Extreme Environments (from the 2012 workshop in Glasgow), Journal of Strain Analysis, vol. 49, no. 4, 2014.

Validation of Computational Solid Mechanics Models (from the 2011 workshop in Shanghai), Journal of Strain Analysis, vol. 48, no. 1, 2013.

The workshop is supported by the MOTIVATE project and further details are available at http://www.engineeringvalidation.org/4th-workshop

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660.

Finding DIMES

A couple of weeks ago I wrote about the ‘INSTRUCTIVE final reckoning’ (see post on January 9th).  INSTRUCTIVE was an EU project, which ended on December 31st, 2018, in which we demonstrated that infra-red cameras could be used to monitor the initiation and propagation of cracks in aircraft structures (see Middleton et al, 2019).  Now, we have seamlessly moved on to a new EU project, called DIMES (Development of Integrated MEasurement Systems), which started on January 1st, 2019.  To quote our EU documentation, the overall aim of DIMES is ‘to develop and demonstrate an automated measurement system that integrates a range of measurement approaches to enable damage and cracks to be detected and monitored as they originate at multi-material interfaces in an aircraft assembly’.  In simpler terms, we are going to take the results from the INSTRUCTIVE project, integrate them with other existing technologies for monitoring the structural health of an aircraft, and produce a system that can be installed in an aircraft fuselage and will provide early warning of the formation of cracks.  We have two years to achieve this target and demonstrate the system in a ground-based test on a real fuselage at an Airbus facility.  This was a scary prospect until we had our kick-off meeting and a follow-up brainstorming session a couple of weeks ago.  Now, it’s a little less scary.

If I have scared you with the prospect of cracks in aircraft, then do not be alarmed; we have been flying aircraft with cracks in them for years.  It is impossible to build an aircraft without cracks appearing, possibly during manufacturing and certainly in service – perfection (i.e. cracklessness) is unattainable – and instead the stresses are kept low enough to ensure that undetected cracks will not grow (see ‘Alan Arnold Griffith’ on April 26th, 2017) and that detected ones are repaired before they propagate significantly (see ‘Aircraft inspection’ on October 10th, 2018).

I should explain that the ‘we’ above is the University of Liverpool and Strain Solutions Limited, who were the partners in INSTRUCTIVE, plus EMPA, the Swiss National Materials Laboratory, and Dantec Dynamics GmbH, a producer of scientific instruments in Ulm, Germany.  I am already working with these latter two organisations in the EU project MOTIVATE; so, we are a close-knit team who know and trust each other  – that’s one of the keys to successful collaborations tackling ambitious challenges with game-changing outcomes.

So how might the outcomes of DIMES be game-changing?  Well, at the moment, aircraft are designed using computer models that are comprehensively validated using measurement data from a large number of expensive experiments.  The MOTIVATE project is about reducing the number of experiments and increasing the quality and quantity of information gained from each experiment, i.e. ‘Getting Smarter’ (see post on June 21st 2017).  However, if the measurement system developed in DIMES allowed us to monitor in-flight strain fields in critical locations on-board an aircraft, then we would have high quality data to support future design work, which would allow further reductions in the campaign of experiments required to support new designs; and we would have continuous comprehensive monitoring of the structural integrity of every aircraft in the fleet, which would allow more efficient planning of maintenance as well as increased safety margins, or reductions in structural weight while maintaining safety margins.  This would be a significant step towards digital twins of aircraft (see ‘Fourth industrial revolution’ on July 4th, 2018 and ‘Can you trust your digital twin?’ on November 23rd, 2016).

The INSTRUCTIVE, MOTIVATE and DIMES projects have received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreements No. 685777, No. 754660 and No. 820951 respectively.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.


Middleton, C.A., Gaio, A., Greene, R.J. & Patterson, E.A., Towards automated tracking of initiation and propagation of cracks in aluminium alloy coupons using thermoelastic stress analysis, Journal of Nondestructive Evaluation, 38:18, 2019.


Million to one

‘All models are wrong, but some are useful’ is a quote, usually attributed to George Box, that is often cited in the context of computer models and simulations.  Working out which models are useful can be difficult, and it is essential to get it right when a model is to be used to design an aircraft, support the safety case for a nuclear power station or inform regulatory risk assessment on a new chemical.  One way to identify a useful model is to assess its predictions against measurements made in the real world [see ‘Model validation’ on September 18th, 2012].  Many people have worked on validation metrics that allow predicted and measured signals to be compared; and some result in a statement of the probability that the predicted and measured signals belong to the same population.  This works well if the predictions and measurements are, for example, the temperature measured at a single weather station over a period of time; however, these validation metrics cannot handle fields of data, for instance the map of temperature, measured with an infrared camera, in a power station during start-up.

We have been working on resolving this issue and we have recently published a paper on ‘A probabilistic metric for the validation of computational models’.  We reduce the dimensionality of a field of data, represented by values in a matrix, to a vector using orthogonal decomposition [see ‘Recognizing strain’ on October 28th, 2015].  The data field could be a map of temperature, the strain field in an aircraft wing or the topography of a landscape – it does not matter.  The decomposition is performed separately and identically on the predicted and measured data fields to create two vectors – one each for the predictions and the measurements.  We look at the differences between these two vectors and compare them against the uncertainty in the measurements to arrive at a probability that the predictions belong to the same population as the measurements.  There are subtleties in the process that I have omitted but, essentially, we can take two data fields composed of millions of values and arrive at a single number to describe the usefulness of the model’s predictions.
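The pipeline of decompose-then-compare can be sketched in a few lines.  This is a minimal illustration using a two-dimensional Chebyshev basis and a crude within-uncertainty count; the basis choice, fitting order and function names are my own assumptions, and the published metric treats the measurement uncertainty probabilistically rather than as a hard threshold:

```python
import numpy as np
from numpy.polynomial import chebyshev

def decompose(field, order=5):
    """Project a 2-D data field onto a tensor-product Chebyshev
    polynomial basis and return the vector of coefficients.  This
    plays the role of the orthogonal decomposition described in the
    text; the basis and order here are illustrative assumptions."""
    ny, nx = field.shape
    x = np.linspace(-1, 1, nx)
    y = np.linspace(-1, 1, ny)
    vx = chebyshev.chebvander(x, order)   # shape (nx, order + 1)
    vy = chebyshev.chebvander(y, order)   # shape (ny, order + 1)
    # Evaluate every basis function at every grid point, then do a
    # least-squares fit of the field to that basis.
    basis = np.einsum('ip,jq->ijpq', vy, vx).reshape(ny * nx, -1)
    coeffs, *_ = np.linalg.lstsq(basis, field.ravel(), rcond=None)
    return coeffs

def validation_probability(predicted, measured, uncertainty):
    """Fraction of coefficients whose predicted-measured difference
    lies within the measurement uncertainty -- a crude stand-in for
    the probabilistic comparison in the published metric."""
    diff = np.abs(decompose(predicted) - decompose(measured))
    return float(np.mean(diff <= uncertainty))
```

For two identical fields the score is 1.0, and a systematic offset in the prediction lowers it; the point is that two matrices of millions of values are reduced to short coefficient vectors before any comparison is made.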

Our paper was published by the Royal Society with a press release, but in the same week as the proposed Brexit agreement, and so I would like to think that it was ignored due to the overwhelming interest in the political storm around Brexit rather than because of its esoteric nature.


Dvurecenska, K., Graham, S., Patelli, E. & Patterson, E.A., A probabilistic metric for the validation of computational models, Royal Society Open Science, 5:180687, 2018.