It is about 35 years since I graduated with my PhD. It was not ground-breaking although, together with my supervisor, I published about half a dozen technical papers based on it, and some of those papers are still being cited, including one this month, which surprises me. I performed experiments and computer modelling on the load and stress distribution in threaded fasteners, or nuts and bolts. There were no digital cameras and no computed tomography, so the experiments involved making and sectioning models of nuts and bolts in transparent plastic using three-dimensional photoelasticity [see ‘Art and Experimental Mechanics‘ on July 17th, 2012]. I took hundreds of photographs of the sections and scanned the negatives in a microdensitometer. The computer modelling was equally slow and laborious because there were no graphical user interfaces (GUIs); instead, I had to type strings of numbers into a terminal, wait overnight while the calculations were performed, and then study reams of numbers printed out on long rolls of paper.

The tedium of the experimental work inspired me to spend the following 15 to 20 years utilising digital technology to revolutionise the field of experimental mechanics. In the past 15 to 20 years, I have moved back towards computer modelling and focused on transforming the way in which measurement data are used to improve the fidelity of computer models and to establish confidence in their predictions [see ‘Establishing fidelity and credibility in tests and simulations‘ on July 25th, 2018].

Since completing my PhD, I have supervised 32 students to successful completion of their PhDs. You might think that was a straightforward process: an initial three years for the first one to complete their research and write their thesis, followed by one graduating every year. But that is not how it worked out; instead, I have had fallow years as well as productive ones.
At the moment, I am in a productive period, having graduated two PhD students per year since 2017. That’s a lot of reading, and I have spent much of the last two weekends reviewing a thesis, which is why PhD theses are the topic of this post!
I spent most of last week at the European Union’s Joint Research Centre in Ispra, Italy. I have been collaborating with the scientists in the European Union Reference Laboratory for alternatives to animal testing [EURL ECVAM]. We have been working together on tracking nanoparticles and, more recently, on the validity and credibility of models. Last week I was there to participate in a workshop on Validation and Acceptance of Artificial Intelligence Models in Health. I presented our work on the credibility matrix and on a set of factors that we have developed for establishing trust in a model and its predictions. I left the JRC on Friday evening and slipped back into the UK just before it left the European Union. The departure of the UK from the European Union reminds me of a novel by José Saramago called ‘The Stone Raft‘, in which the Iberian peninsula breaks off from the European mainland and drifts around the Atlantic Ocean. The bureaucrats in Europe have to run around dealing with the ensuing disruption while five people in Spain and Portugal are drawn together by surreal events on the stone raft adrift in the ocean.
I need to confess to writing a misleading post some months ago entitled ‘In Einstein’s footprints?‘ on February 27th, 2019, in which I promoted our 4th workshop on the ‘Validation of Computational Mechanics Models‘ that we held last month at the Guild Hall of Carpenters [Zunfthaus zur Zimmerleuten] in Zurich. I implied that speakers at the workshop would be stepping in Einstein’s footprints when they presented their research there, because Einstein presented a paper at the same venue in 1910. However, as our host in Zurich revealed in his introductory remarks, the Guild Hall was gutted by fire in 2007, and so we were meeting in a fake, or replica, which was so good that most of us had not realised. This was quite appropriate because a theme of the workshop was enhancing the credibility of computer models that are used to replicate the real world. We discussed the issues surrounding the trustworthiness of models in a wide range of fields including aerospace engineering, biomechanics, nuclear power and toxicology. Many of the presentations are available on the website of the EU project MOTIVATE, which organised and sponsored the workshop as part of its dissemination programme. While we did not solve any problems, we did broaden people’s understanding of the issues associated with the trustworthiness of predictions and identified the need to develop common approaches to support regulatory decisions across a range of industrial sectors – that’s probably the theme for our 5th workshop!
The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660 and the Swiss State Secretariat for Education, Research and Innovation under contract number 17.00064.
The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.
A month or so ago I gave a lecture entitled ‘Establishing FACTS (Fidelity And Credibility in Tests & Simulations)’ to the local branch of the Institution of Engineering and Technology (IET). Of course, my title was a play on words, because the Oxford English Dictionary defines a ‘fact’ as ‘a thing that is known or proved to be true’ or ‘information used as evidence or as part of a report’. One of my current research interests is how we establish predictions from simulations as evidence that can be used reliably in decision-making. This is important because simulations based on computational models have become ubiquitous in engineering for, amongst other things, design optimisation and evaluation of structural integrity. These models need to possess the appropriate level of fidelity and to be credible in the eyes of decision-makers, not just their creators. Model credibility is usually provided through validation processes using a small number of physical tests that must yield a large quantity of reliable and relevant data [see ‘Getting smarter‘ on June 21st, 2017]. Reliable and relevant data means making measurements with low levels of uncertainty under real-world conditions, which is usually challenging.
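To make the idea of a validation check concrete, here is a minimal sketch, not the metrics used in my research, of one simple way such a comparison is sometimes framed: asking what fraction of a model’s predictions fall within the expanded uncertainty of the corresponding measurements. All names and data below are invented for illustration.

```python
import numpy as np

def fraction_within_uncertainty(predicted, measured, u_measured, coverage=2.0):
    """Fraction of points where |prediction - measurement| lies inside
    the expanded measurement uncertainty band (coverage factor k)."""
    inside = np.abs(predicted - measured) <= coverage * u_measured
    return float(inside.mean())

# Illustrative data: a 'true' strain field, noisy measurements of it,
# and a model prediction with a small systematic bias.
rng = np.random.default_rng(0)
truth = np.linspace(0.0, 1.0, 50)
measured = truth + rng.normal(0.0, 0.01, truth.shape)   # measurement noise
predicted = truth + 0.005                               # biased model
u = np.full(truth.shape, 0.01)                          # standard uncertainty

score = fraction_within_uncertainty(predicted, measured, u)
print(f"fraction of points within expanded uncertainty: {score:.2f}")
```

A check like this only makes sense when the measurement uncertainties are themselves trustworthy, which is precisely why the lecture emphasised reliable, relevant data as the foundation of credibility.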
These topics recur through much of my research and have found applications in aerospace engineering, nuclear engineering and biology. My lecture to the IET gave an overview of these ideas using applications from each of these fields, some of which I have described in past posts. So, I have now created a new page on this blog with a catalogue of these past posts on the theme of ‘FACTS‘. Feel free to have a browse!