Tag Archives: Royal Society

Spatio-temporal damage maps for composite materials

Earlier this year, my group published a new technique for illustrating the development of damage as a function of both space and time in materials during testing in a laboratory.  The information is presented in a damage-time map that shows where and when damage appears in the material.  The maps are based on the concept that damage represents a change in the structure of the material and, hence, produces changes in the load paths or stress distribution in the material.  We can use any of a number of optical techniques to measure strain, which is directly related to stress, across the surface of the material, and then look for changes in the strain distribution in real time.  Wherever a permanent change is seen to occur, there must also be permanent deformation or damage.  We use image decomposition techniques that we developed some time ago [see ‘Recognizing strain‘ on October 28th, 2015] to identify the changes.  Our damage-time maps remove the need for skilled operators to spend large amounts of time reviewing data and making subjective decisions.  They also allow a large amount of information to be presented in a single image, which makes detailed comparisons with computer predictions easier and more readily quantifiable and, in turn, supports the validation of computational models [see ‘Model validation‘ on September 18th, 2012].
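
In outline, the approach can be sketched in a few lines of Python.  The sketch below is illustrative only and is not the published algorithm: it assumes the strain fields from successive load steps are supplied as equal-sized NumPy arrays, it uses a simple Chebyshev polynomial fit as a stand-in for the image decomposition we actually employ, and the tile size and noise floor (an estimate of the measurement noise) are user-supplied; all of the function names are hypothetical.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def decompose(field, order=4):
    """Reduce a 2-D strain patch to a short vector of orthogonal
    (Chebyshev) polynomial coefficients via least squares."""
    ny, nx = field.shape
    X, Y = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(-1, 1, ny))
    V = C.chebvander2d(X.ravel(), Y.ravel(), [order, order])
    coeffs, *_ = np.linalg.lstsq(V, field.ravel(), rcond=None)
    return coeffs

def damage_time_map(strain_history, noise_floor, tile=32):
    """Return a map whose entries give the first load step at which each
    tile of the strain field departs from its initial (undamaged) state
    by more than the measurement noise (-1 means no change detected)."""
    ny, nx = strain_history[0].shape
    rows, cols = ny // tile, nx // tile
    onset = -np.ones((rows, cols), dtype=int)
    ref = [[decompose(strain_history[0][r*tile:(r+1)*tile, c*tile:(c+1)*tile])
            for c in range(cols)] for r in range(rows)]
    for step, field in enumerate(strain_history[1:], start=1):
        for r in range(rows):
            for c in range(cols):
                if onset[r, c] >= 0:
                    continue  # damage already recorded for this tile
                patch = field[r*tile:(r+1)*tile, c*tile:(c+1)*tile]
                if np.linalg.norm(decompose(patch) - ref[r][c]) > noise_floor:
                    onset[r, c] = step
    return onset
```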

The structural integrity of composite materials is an on-going area of research because we only have a limited understanding of these materials.  It is easy to design structures using materials that have a uniform or homogeneous structure and mechanical properties that do not vary with orientation, i.e. isotropic properties.  For simple components, an engineer can predict the stresses and likely failure modes using the laws of physics, a pencil and paper, plus perhaps a calculator.  However, when materials contain fibres embedded in a matrix, such as carbon fibres in an epoxy resin, the analysis of structural behaviour becomes much more difficult due to the interactions between the fibres and between the fibres and the matrix.  Of course, these interactions are also what make composite materials interesting because they allow less material to be used to achieve the same performance as homogeneous isotropic materials.  There are very many ways of arranging fibres in a matrix, as well as many different types of fibre and matrix; and engineers do not fully understand most of these interactions, nor the mechanisms that lead to failure.

The image shows, on the left, the maximum principal strain in a composite specimen loaded longitudinally in tension to just before failure; and, on the right, the corresponding damage-time map indicating when and where damage developed during the tensile loading.

Source:

Christian WJR, Dvurecenska K, Amjad K, Pierce J, Przybyla C & Patterson EA, Real-time quantification of damage in structural materials during mechanical testing, Royal Society Open Science, 7:191407, 2020.

Size matters

Most of us have a sub-conscious understanding of the forces that control the interaction of objects in the size scale in which we exist, i.e. from millimetres through to metres.  In this size scale, gravitational and inertial forces dominate the interactions of bodies.  However, at the size scale that we cannot see, even when we use an optical microscope, the forces that dominate the behaviour of objects interacting with one another are different.  There was a hint of this change in behaviour in our studies of the diffusion of nanoparticles [see ‘Slow moving nanoparticles‘ on December 13th, 2017], when we found that the movement of nanoparticles less than 100 nanometres in diameter was independent of their size.  Last month we published another article in one of the Nature journals, Scientific Reports, on ‘The influence of inter-particle forces on diffusion at the nanoscale‘, in which we demonstrated by experiment that van der Waals forces and electrostatic forces are the dominant forces at the nanoscale.  These forces control the diffusion of nanoparticles as well as surface adhesion, friction and colloid stability.  This finding is significant because the ionic strength of the medium in which the particles are moving will influence the strength of these forces and hence the behaviour of the nanoparticles.  Since biological fluids contain ions, this will be important in understanding and predicting the behaviour of nanoparticles in biological applications where they might be used for drug delivery, or might have a toxicological impact, depending on their composition.
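
For context, the classical expectation comes from the Stokes-Einstein relation, in which the diffusion coefficient of a sphere in a fluid scales inversely with its diameter.  The short Python sketch below simply evaluates that relation for a few particle sizes in water (the temperature and viscosity are illustrative values, not data from the paper); it is this size dependence that we found breaks down below about 100 nanometres.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def stokes_einstein_diffusion(diameter_m, temperature_K=293.15,
                              viscosity_Pa_s=1.0e-3):
    """Classical Stokes-Einstein diffusion coefficient [m^2/s] for a
    sphere in a Newtonian fluid: D = k_B T / (3 pi eta d)."""
    return K_B * temperature_K / (3.0 * math.pi * viscosity_Pa_s * diameter_m)

# Classically, halving the diameter doubles the diffusion coefficient.
for d_nm in (50, 100, 200):
    D = stokes_einstein_diffusion(d_nm * 1e-9)
    print(f"{d_nm:4d} nm particle in water at 20 C: D = {D:.2e} m^2/s")
```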

Van der Waals forces are weak, distance-dependent attractive forces between uncharged molecules.  They are named after the Dutch physicist Johannes Diderik van der Waals (1837-1923).  Electrostatic forces occur between charged particles or molecules and are usually repulsive, with the result that van der Waals and electrostatic forces can balance each other, or not, depending on the circumstances.
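
The balance between these two contributions is commonly described using DLVO theory, in which a screened electrostatic repulsion that decays over the Debye length is added to the van der Waals attraction.  The Python sketch below, which is not taken from our paper, illustrates the idea for a pair of equal spheres; the Hamaker constant, surface potential and Debye length are placeholder values chosen only to give a physically plausible curve.

```python
import numpy as np

EPS0 = 8.854e-12   # vacuum permittivity [F/m]
EPS_R = 80.0       # relative permittivity of water

def van_der_waals(h, radius, hamaker=1e-20):
    """Sphere-sphere van der Waals attraction [J] at surface separation h [m],
    using the simple Derjaguin result V = -A R / (12 h)."""
    return -hamaker * radius / (12.0 * h)

def double_layer(h, radius, surface_potential=0.025, debye_length=10e-9):
    """Screened electrostatic (double-layer) repulsion [J] between equal
    spheres: V = 2 pi eps eps0 R psi^2 exp(-h / lambda_D).  The Debye
    length lambda_D shrinks as the ionic strength of the medium rises."""
    return (2.0 * np.pi * EPS_R * EPS0 * radius
            * surface_potential**2 * np.exp(-h / debye_length))

# Total DLVO-style interaction for two 50 nm particles (radius 25 nm):
# attraction wins at very small separations, repulsion at moderate ones.
h = np.linspace(0.5e-9, 50e-9, 500)   # surface separations [m]
total = van_der_waals(h, 25e-9) + double_layer(h, 25e-9)

kT = 1.380649e-23 * 293.15            # thermal energy at 20 C [J]
print(f"Repulsive barrier of about {total.max() / kT:.0f} kT "
      f"at {h[np.argmax(total)] * 1e9:.1f} nm separation")
```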

Sources:

Giorgi F, Coglitore D, Curran JM, Gilliland D, Macko P, Whelan M, Worth A & Patterson EA, The influence of inter-particle forces on diffusion at the nanoscale, Scientific Reports, 9:12689, 2019.

Coglitore D, Edwardson SP, Macko P, Patterson EA & Whelan MP, Transition from fractional to classical Stokes-Einstein behaviour in simple fluids, Royal Society Open Science, 4:170507, 2017.

Patterson EA & Whelan MP, Tracking nanoparticles in an optical microscope using caustics, Nanotechnology, 19(10):105502, 2009.

Image: from Giorgi et al., 2019, Figure 1, showing a photograph of a caustic (top) generated by a 50 nm gold nanoparticle in water, taken with the optical microscope adjusted for Köhler illumination and the condenser field aperture closed to its minimum, following the method of Patterson and Whelan, with its 2D random walk over a period of 3 seconds superimposed; and a plot of the same walk (bottom).

Thought leadership in fusion engineering

The harnessing of fusion energy has become something of a holy grail – sought after by many without much apparent progress.  It is the energy process that ‘powers’ the stars and, if we could reproduce it on earth in a controlled environment, it would offer almost unlimited energy with very low environmental costs.  However, understanding the science is an enormous challenge and the engineering task to design, build and operate a fusion-fuelled power station is even greater.  The engineering difficulties originate from the combination of two factors: the emergent behaviour present in such a complex system and the fact that it has never been done before.  Engineering has achieved lots of firsts, but usually through incremental development; however, it would appear that fusion energy will only work when all of the required conditions are present.  In other words, incremental development is not viable and we need everything ready before flicking the switch.  Not surprisingly, engineers are cautious about flicking switches when they are not sure what will happen.  Yet the potential benefits of getting it right are huge; so we would really like to do it.  Hence the holy grail status: much sought after and offering infinite abundance.

Last week I joined the search, or at least offered guidance to those searching, by publishing an article in Royal Society Open Science on ‘An integrated digital framework for the design, build and operation of fusion power plants‘.  Working with colleagues at the Culham Centre for Fusion Energy, Richard Taylor and I have taken our earlier work on an integrated nuclear digital environment for nuclear energy generation using fission [see ‘Enabling or disruptive technology for nuclear engineering?‘ on January 28th, 2015] and combined it with the hierarchical pyramid of testing and simulation used in the aerospace industry [see ‘Hierarchical modelling in engineering and biology‘ on March 14th, 2018] to create a framework that can be used to guide the exploration of large design domains using computational models within a distributed and collaborative community of engineers and scientists.  We hope it will shorten development times, reduce design and build costs, and improve credibility, operability, reliability and safety.  That is a long list of potential benefits for a relatively simple idea in a relatively short paper (only 12 pages).  Follow the link to find out more – it is an open access paper, so it’s free.
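
As a rough illustration of the kind of structure such a framework implies, rather than of the framework itself, the Python sketch below links each tier of a testing and simulation pyramid to the computational models used at that level and to the physical test evidence available to validate them; every name and example entry here is hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tier:
    """One level of the hierarchical testing and simulation pyramid,
    e.g. material coupon, component, sub-system or whole plant."""
    name: str
    models: List[str] = field(default_factory=list)          # computational models at this level
    test_evidence: List[str] = field(default_factory=list)   # physical tests used for validation

@dataclass
class DigitalFramework:
    """A pyramid of tiers ordered from simple coupons at the base to the
    integrated plant at the apex; validated models at each tier inform
    the models at the tier above."""
    tiers: List[Tier]

    def unvalidated_models(self):
        """Models with no supporting test evidence at their tier, i.e.
        candidates for further physical testing before they are relied on."""
        return [(t.name, m) for t in self.tiers if not t.test_evidence for m in t.models]

# Illustrative use for a hypothetical fusion plant design study
pyramid = DigitalFramework([
    Tier("coupon", models=["neutron-damage material model"],
         test_evidence=["irradiated coupon tests"]),
    Tier("component", models=["breeder-blanket thermo-mechanical model"]),
    Tier("plant", models=["integrated plant systems model"]),
])
print(pyramid.unvalidated_models())
```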

References

Patterson EA, Taylor RJ & Bankhead M, A framework for an integrated nuclear digital environment, Progress in Nuclear Energy, 87:97-103, 2016.

Patterson EA, Purdie S, Taylor RJ & Waldon C, An integrated digital framework for the design, build and operation of fusion power plants, Royal Society Open Science, 6(10):181847, 2019.

Million to one

‘All models are wrong, but some are useful’ is a quote, usually attributed to George Box, that is often cited in the context of computer models and simulations.  Working out which models are useful can be difficult, and it is essential to get it right when a model is to be used to design an aircraft, support the safety case for a nuclear power station or inform regulatory risk assessment on a new chemical.  One way to identify a useful model is to assess its predictions against measurements made in the real world [see ‘Model validation’ on September 18th, 2012].  Many people have worked on validation metrics that allow predicted and measured signals to be compared; and some result in a statement of the probability that the predicted and measured signals belong to the same population.  This works well if the predictions and measurements are, for example, the temperature measured at a single weather station over a period of time; however, these validation metrics cannot handle fields of data, for instance the map of temperature, measured with an infrared camera, in a power station during start-up.  We have been working on resolving this issue and we have recently published a paper on ‘A probabilistic metric for the validation of computational models’.  We reduce the dimensionality of a field of data, represented by values in a matrix, to a vector using orthogonal decomposition [see ‘Recognizing strain’ on October 28th, 2015].  The data field could be a map of temperature, the strain field in an aircraft wing or the topology of a landscape – it does not matter.  The decomposition is performed separately and identically on the predicted and measured data fields to create two vectors – one each for the predictions and the measurements.  We look at the differences between these two vectors and compare them against the uncertainty in the measurements to arrive at a probability that the predictions belong to the same population as the measurements.  There are subtleties in the process that I have omitted but, essentially, we can take two data fields composed of millions of values and arrive at a single number to describe the usefulness of the model’s predictions.
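
A minimal sketch of that workflow in Python, which simplifies rather than reproduces the published metric, might look like the code below: both fields are decomposed identically into Chebyshev coefficients and the coefficient differences are compared against the measurement uncertainty to give a probability-style score.  The `measurement_uncertainty` argument is assumed to be a user-supplied estimate (a scalar, or a vector with one entry per coefficient).

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy import stats

def decompose(field, order=8):
    """Reduce a 2-D data field to a vector of orthogonal (Chebyshev)
    polynomial coefficients; applied identically to predictions and
    measurements."""
    ny, nx = field.shape
    X, Y = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(-1, 1, ny))
    V = C.chebvander2d(X.ravel(), Y.ravel(), [order, order])
    coeffs, *_ = np.linalg.lstsq(V, field.ravel(), rcond=None)
    return coeffs

def validation_probability(predicted, measured, measurement_uncertainty):
    """Probability-style score that the predicted and measured fields
    belong to the same population: coefficient differences are judged
    against the measurement uncertainty (a simplified stand-in for the
    published metric)."""
    diff = decompose(predicted) - decompose(measured)
    z = diff / measurement_uncertainty
    # Two-sided probability that each difference is explained by noise,
    # averaged over the coefficients
    p_values = 2.0 * (1.0 - stats.norm.cdf(np.abs(z)))
    return float(np.mean(p_values))
```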

Our paper was published by the Royal Society with a press release, but in the same week as the proposed Brexit agreement; so I would like to think that it was ignored due to the overwhelming interest in the political storm around Brexit rather than because of its esoteric nature.

Source:

Dvurecenska K, Graham S, Patelli E & Patterson EA, A probabilistic metric for the validation of computational models, Royal Society Open Science, 5:180687, 2018.