Category Archives: FACTS

Crack tip plasticity in reactor steels

Amplitude of the temperature field in steel under cyclic load, with a crack growing from left to right along the horizontal centre line; the stress concentration at the crack tip exhibits the peak values. The wedge shapes in the left corners are part of the loading system.

At this time of year the flow into my inbox is augmented daily by prospective PhD students sending me long emails describing how their skills, qualifications and interests perfectly match the needs of my research group, or sometimes someone else’s group if they have not been careful in setting up their mass mailing.  At the moment, I have four PhD projects for which I am looking for outstanding students; so, because it will help prospective students and might interest my other readers but also because I am short of ideas for the blog, I plan to describe one project per week for the next month.

The first project is about the effect of hydrogen on crack tip plasticity in reactor steels.  Fatigue cracks grow in steels by coalescing imperfections in the microstructure of the material until small voids are formed in areas of high stress.  When these voids connect together a crack is formed.  Repeated loading and unloading of the material provides the energy to move the imperfections, known as dislocations, and geometric features in structures are stress concentrators which focus this energy causing cracks to be formed in their vicinity.  The movement of dislocations causes permanent, or plastic, deformation of the material.  The sharp geometry of a crack tip becomes a stress concentrator creating a plastic zone in which dislocations pile up and voids form allowing the crack to extend [see post on ‘Alan Arnold Griffith‘ on April 26th, 2017].  It is possible to detect the thermal energy released during plastic deformation using a technique known as thermoelastic stress analysis [see ‘Counting photons to measure stress‘ on November 18th, 2015] as well as to measure the stress field associated with the propagating crack [1].  One of my current PhD students has been using this technique to investigate the effect of irradiation damage on the growth of cracks in stainless steel used in nuclear reactors.  We use an ion accelerator at the Dalton Cumbrian Facility to introduce radiation damage into specimens the size of a postage stamp and afterwards apply cyclic loads and watch the fatigue crack grow using our sensitive infra-red cameras.  We have found that the irradiation reduced the rate of crack growth and we will be publishing a paper on it shortly [and a PhD thesis].
In the new project, our industrial sponsors want us to explore the effect of hydrogen on crack growth in irradiated steel, because the presence of hydrogen is known to accelerate fatigue crack growth [2], which is believed to happen as a result of hydrogen atoms disrupting the formation of dislocations at the microscale and localising plasticity at the crack tip on the mesoscale.  However, these ideas have not been demonstrated in experiments, so we plan to do this using thermoelastic stress analysis and to investigate the combined influence of hydrogen and irradiation by developing a process for pre-charging the steel specimens with hydrogen using an electrolytic cell and irradiating them using the ion accelerator.  Both hydrogen and radiation are present in a nuclear reactor and hence the results will be relevant to predicting the safe working life of nuclear reactors.
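The effect of hydrogen on fatigue life can be illustrated with the empirical Paris law, which relates the crack growth rate per cycle to the stress intensity range at the crack tip.  This is a minimal sketch, not the analysis used in the project, and the material constants below are illustrative only; hydrogen-enhanced growth is represented crudely by raising the Paris coefficient:

```python
import numpy as np

def crack_growth_cycles(a0, a_final, delta_sigma, C, m, geometry=1.12):
    """Estimate the number of load cycles for a fatigue crack to grow
    from length a0 to a_final (metres) under a cyclic stress range
    delta_sigma (MPa), by numerically integrating the Paris law:

        da/dN = C * (dK)^m,  with  dK = geometry * delta_sigma * sqrt(pi * a)

    C and m are empirical material constants; hydrogen embrittlement
    effectively increases C, shortening the fatigue life."""
    a = a0
    cycles = 0.0
    da = (a_final - a0) / 10000  # fixed crack-length increment
    while a < a_final:
        dK = geometry * delta_sigma * np.sqrt(np.pi * a)  # stress intensity range
        cycles += da / (C * dK**m)                         # cycles for this increment
        a += da
    return cycles
```

Doubling or trebling `C`, as hydrogen charging might, reduces the predicted life proportionately, which is the kind of shift the thermoelastic measurements would aim to resolve experimentally.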

The PhD project is fully funded for UK and EU citizens as part of a Centre for Doctoral Training and will involve a year of specialist training followed by three years of research.  For more information, follow this link.


  1. Yang, Y., Crimp, M., Tomlinson, R.A., Patterson, E.A., 2012, Quantitative measurement of plastic strain field at a fatigue crack tip, Proc. R. Soc. A., 468(2144):2399-2415.
  2. Matsunaga, H., Takakuwa, O., Yamabe, J., & Matsuoka, S., 2017, Hydrogen-enhanced fatigue crack growth in steels and its frequency dependence, Phil. Trans. R. Soc. A, 375(2098):20160412.

Million to one

‘All models are wrong, but some are useful’ is a quote, usually attributed to George Box, that is often cited in the context of computer models and simulations.  Working out which models are useful can be difficult and it is essential to get it right when a model is to be used to design an aircraft, support the safety case for a nuclear power station or inform regulatory risk assessment on a new chemical.  One way to identify a useful model is to assess its predictions against measurements made in the real world [see ‘Model validation’ on September 18th, 2012].  Many people have worked on validation metrics that allow predicted and measured signals to be compared; and some result in a statement of the probability that the predicted and measured signals belong to the same population.  This works well if the predictions and measurements are, for example, the temperature measured at a single weather station over a period of time; however, these validation metrics cannot handle fields of data, for instance the map of temperature, measured with an infrared camera, in a power station during start-up.  We have been working on resolving this issue and we have recently published a paper on ‘A probabilistic metric for the validation of computational models’.  We reduce the dimensionality of a field of data, represented by values in a matrix, to a vector using orthogonal decomposition [see ‘Recognizing strain’ on October 28th, 2015].  The data field could be a map of temperature, the strain field in an aircraft wing or the topology of a landscape – it does not matter.  The decomposition is performed separately and identically on the predicted and measured data fields to create two vectors – one each for the predictions and measurements.  We look at the differences in these two vectors and compare them against the uncertainty in the measurements to arrive at a probability that the predictions belong to the same population as the measurements.
There are subtleties in the process that I have omitted but essentially, we can take two data fields composed of millions of values and arrive at a single number to describe the usefulness of the model’s predictions.
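The pipeline described above can be sketched in a few lines.  This is a toy illustration only: the published metric uses a particular orthogonal decomposition and a proper statistical comparison, whereas this sketch substitutes an SVD-based decomposition and a simple within-uncertainty count, so the function name and thresholding are my assumptions, not the paper's method:

```python
import numpy as np

def validation_probability(predicted, measured, meas_uncertainty, n_modes=10):
    """Toy probabilistic validation metric for fields of data.

    Reduces the predicted and measured fields (2-D arrays of the same
    shape) to coefficient vectors using one and the same orthogonal
    decomposition, then reports the fraction of coefficients whose
    difference lies within the measurement uncertainty."""
    # Build a single orthonormal basis from the measured field via SVD;
    # the identical basis is then applied to both fields.
    U, s, Vt = np.linalg.svd(measured, full_matrices=False)
    k = min(n_modes, len(s))

    # Project each field onto the first k modes to get coefficient vectors.
    coeffs_meas = np.array([U[:, i] @ measured @ Vt[i, :] for i in range(k)])
    coeffs_pred = np.array([U[:, i] @ predicted @ Vt[i, :] for i in range(k)])

    # Compare the difference vector against the measurement uncertainty.
    diff = np.abs(coeffs_pred - coeffs_meas)
    return np.mean(diff <= meas_uncertainty)
```

A perfect prediction returns 1.0, and the score falls as the predicted field drifts outside the measurement uncertainty; the key idea is that millions of field values collapse to a short coefficient vector before any comparison is made.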

Our paper was published by the Royal Society with a press release, but in the same week as the proposed Brexit agreement was announced; so I would like to think that it was ignored because of the overwhelming interest in the political storm around Brexit rather than because of its esoteric nature.


Dvurecenska K, Graham S, Patelli E & Patterson EA, A probabilistic metric for the validation of computational models, Royal Society Open Science, 5:180687, 2018.

Establishing fidelity and credibility in tests & simulations (FACTS)

A month or so ago I gave a lecture entitled ‘Establishing FACTS (Fidelity And Credibility in Tests & Simulations)’ to the local branch of the Institution of Engineering and Technology (IET). Of course my title was a play on words because the Oxford English Dictionary defines a ‘fact’ as ‘a thing that is known or proved to be true’ or ‘information used as evidence or as part of a report’.   One of my current research interests is how we establish predictions from simulations as evidence that can be used reliably in decision-making.  This is important because simulations based on computational models have become ubiquitous in engineering for, amongst other things, design optimisation and evaluation of structural integrity.   These models need to possess the appropriate level of fidelity and to be credible in the eyes of decision-makers, not just their creators.  Model credibility is usually provided through validation processes using a small number of physical tests that must yield a large quantity of reliable and relevant data [see ‘Getting smarter‘ on June 21st, 2017].  Reliable and relevant data means making measurements with low levels of uncertainty under real-world conditions, which is usually challenging.

These topics recur through much of my research and have found applications in aerospace engineering, nuclear engineering and biology. My lecture to the IET gave an overview of these ideas using applications from each of these fields, some of which I have described in past posts.  So, I have now created a new page on this blog with a catalogue of these past posts on the theme of ‘FACTS‘.  Feel free to have a browse!

Fourth industrial revolution

Have you noticed that we are in the throes of a fourth industrial revolution?

The first industrial revolution occurred towards the end of the 18th century with the introduction of steam power and mechanisation.  The second industrial revolution took place at the end of the 19th and beginning of the 20th century and was driven by the invention of electrical devices and mass production.  The third industrial revolution was brought about by computers and automation at the end of the 20th century.  The fourth industrial revolution is happening as a result of combining physical and cyber systems.  It is also called Industry 4.0 and is seen as the integration of additive manufacturing, augmented reality, Big Data, cloud computing, cyber security, the Internet of Things (IoT), simulation and systems engineering.  Most organisations are struggling with the integration process and, as a consequence, are only exploiting a fraction of the capabilities of the new technology.  Revolutions are, by their nature, disruptive and those organisations that embrace and exploit the innovations will benefit while the existence of the remainder is under threat [see ‘The disrupting benefit of innovation’ on May 23rd, 2018].

Our work on the Integrated Nuclear Digital Environment, on Digital Twins, in the MOTIVATE project and on hierarchical modelling in engineering and biology is all part of the revolution.

Links to these research posts:

‘Enabling or disruptive technology for nuclear engineering?’ on January 28th, 2015

‘Can you trust your digital twin?’ on November 23rd, 2016

‘Getting Smarter’ on June 21st, 2017

‘Hierarchical modelling in engineering and biology’ on March 14th, 2018


Image: Christoph Roser [CC BY-SA 4.0].