Tag Archives: nuclear energy

Digital twins that thrive in the real world

Windows of the Soul II [3D video art installation: http://www.haigallery.com/sonia-falcone/]

Digital twins are becoming ubiquitous in many areas of engineering [see ‘Can you trust your digital twin?’ on November 23rd, 2016].  At the same time, the terminology is becoming blurred as digital shadows and digital models are treated as if they were synonymous with digital twins.  A digital model is a digitised replica of a physical entity which lacks any automatic data exchange between the entity and its replica.  A digital shadow is the digital representation of a physical object with a one-way flow of information from the object to its representation.  A digital twin, however, is a functional representation with a live feedback loop to its counterpart in the real world.  The feedback loop combines continuous updates to the digital twin about the condition and performance of the physical entity, based on data from sensors, with analysis from the digital twin that informs the operation of the physical entity.  This enables a digital twin to provide a service to many stakeholders.  For example, the users of a digital twin of an aircraft engine could include the manufacturer, the operator, the maintenance providers and the insurers.  These capabilities imply that digital twins are themselves becoming products which exist in a digital context that might connect many digital products, thus forming an integrated digital environment.  I wrote about integrated digital environments when they were a concept and the primary challenges were technical in nature [see ‘Enabling or disruptive technology for nuclear engineering?’ on January 28th, 2015].  Many of these technical challenges have been resolved and the next set of challenges is economic and commercial, associated with launching digital twins into global markets that lack adequate understanding, legislation, security, regulation or governance for digital products.  In collaboration with my colleagues at the Virtual Engineering Centre, we have recently published a white paper, entitled ‘Transforming digital twins into digital products that thrive in the real world’, that reviews these issues and identifies the need to establish digital contexts that embrace the social, economic and technical requirements for the appropriate use of digital twins [see ‘Digital twins could put at risk what it means to be human’ on November 18th, 2020].
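To make these distinctions concrete, here is a minimal sketch in Python of the three levels of data exchange described above; the class and method names are purely illustrative assumptions on my part and are not drawn from any real digital twin framework or from our white paper.

    class DigitalModel:
        """A digitised replica: no automatic data exchange with the physical entity."""
        def __init__(self, parameters):
            self.parameters = dict(parameters)  # set once, e.g. from design data

        def simulate(self):
            # Stand-in for a physics-based analysis of condition and performance.
            return {"predicted_load": self.parameters.get("load", 0.0) * 1.1}

    class DigitalShadow(DigitalModel):
        """One-way flow of information: sensor data updates the replica."""
        def ingest(self, sensor_data):
            self.parameters.update(sensor_data)  # physical -> digital only

    class DigitalTwin(DigitalShadow):
        """Adds the live feedback loop: analysis flows back to the physical entity."""
        def advise(self, apply_to_asset):
            prediction = self.simulate()                  # analyse latest condition
            apply_to_asset(prediction["predicted_load"])  # digital -> physical

    # Example: a sensor update flows in and a recommendation flows back out.
    twin = DigitalTwin({"load": 100.0})
    twin.ingest({"load": 120.0})
    twin.advise(lambda p: print(f"recommended operating limit: {p:.1f}"))

The only structural difference between the shadow and the twin is the advise method, which closes the feedback loop from the digital representation back to the physical entity; everything else is shared.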

Credible predictions for regulatory decision-making

Regulators are charged with ensuring that manufactured products, from aircraft and nuclear power stations to cosmetics and vaccines, are safe.  The general public seeks certainty that these devices and the materials and chemicals they are made from will not harm them or the environment.  Technologists who design and manufacture these products know that absolute certainty is unattainable and near-certainty is unaffordable.  Hence, they attempt to deliver the service or product that society desires while ensuring that the risks are As Low As Reasonably Practicable (ALARP).  The role of regulators is to independently assess the risks, make a judgment on their acceptability and thus decide whether the operation of a power station or distribution of a vaccine can go ahead.  These are difficult decisions with huge potential consequences – just think of the more than three hundred people killed in the two crashes of Boeing 737 Max airplanes or the 10,000 or so people affected by birth defects caused by the drug thalidomide.  Evidence presented to support applications for regulatory approval is largely based on physical tests, for example fatigue tests on an aircraft structure or toxicological tests using animals.  In some cases the physical tests might not be entirely representative of the real-life situation, which can make it difficult to make decisions using the data; for instance, a ground test on an airplane is not the same as a flight test and in many respects the animals used in toxicity testing are physiologically different to humans.  In addition, physical tests are expensive and time-consuming, which both drives up the cost of seeking regulatory approval and slows down the translation of new innovative products to the market.  The almost ubiquitous use of computer-based simulations to support the research, development and design of manufactured products inevitably leads to their use in supporting regulatory applications.  This creates challenges for regulators who must judge the trustworthiness of predictions from these simulations [see ‘Fake facts & untrustworthy predictions’ on December 4th, 2019].  It is standard practice for modellers to demonstrate the validity of their models; however, validation does not automatically lead to acceptance of predictions by decision-makers.  Acceptance is more closely related to scientific credibility.  I have been working across a number of disciplines on the scientific credibility of models, including in engineering where multi-physics phenomena are important, such as hypersonic flight and fusion energy [see ‘Thought leadership in fusion energy’ on October 9th, 2019], and in computational biology and toxicology [see ‘Hierarchical modelling in engineering and biology’ on March 14th, 2018].  Working together with my collaborators in these disciplines, we have developed a common set of factors which underpin scientific credibility; they are based on principles drawn from the literature on the philosophy of science and are designed to be both discipline-independent and method-agnostic [Patterson & Whelan, 2019; Patterson et al, 2021].  We hope that our cross-disciplinary approach will break down the subject-silos that have become established as different scientific communities have developed their own frameworks for validating models.
As mentioned above, the process of validation tends to be undertaken by model developers and, in some sense, belongs to them; whereas credibility is not exclusive to the developer but is a form of trust that needs to be shared with a decision-maker who seeks to use the predictions to inform their decision [see ‘Credibility is in the eye of the beholder’ on April 20th, 2016].  Trust requires a common knowledge base and understanding that is usually built through interactions.  We hope the credibility factors will provide a framework for these interactions as well as a structure for building a portfolio of evidence that demonstrates the reliability of a model.

References:

Patterson EA & Whelan MP, On the validation of variable fidelity multi-physics simulations, J. Sound & Vibration, 448:247-258, 2019.

Patterson EA, Whelan MP & Worth A, The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application, Computational Toxicology, 17: 100144, 2021.

Image: Extract from abstract by Zahrah Resh.

Digital twins could put at risk what it means to be human

I have written in the past about my research on the development and use of digital twins.  A digital twin is a functional representation in a virtual world of a real world entity that is continually updated with data from the real world [see ‘Fourth industrial revolution’ on July 4th, 2018 and also a short video at https://www.youtube.com/watch?v=iVS-AuSjpOQ].  I am working with others on developing an integrated digital nuclear environment from which digital twins of individual power stations could be spawned in parallel with the manufacture of their physical counterparts [see ‘Enabling or disruptive technology for nuclear engineering’ on January 1st, 2015 and ‘Digitally-enabled regulatory environment for fusion power-plants’ on March 20th, 2019].  A couple of months ago, I wrote about the difficulty of capturing tacit knowledge in digital twins, which is knowledge that is generally not expressed but is retained in the minds of experts and is often essential to developing and operating complex engineering systems [see ‘Tacit hurdle to digital twins’ on August 26th, 2020].  The concept of tapping into someone’s mind to extract tacit knowledge brings us close to thinking about human digital twins, which so far have been restricted to computational models of various parts of human anatomy and physiology.  The idea of a digital twin of someone’s mind raises a myriad of philosophical and ethical issues.  Whilst the purpose of a digital twin of the mind of an operator of a complex system might be to better predict and understand human-machine interactions, the opportunity to use the digital twin to advance techniques of personalisation will likely be too tempting to ignore.  Personalisation is the tailoring of the digital world to respond to our personal needs, for instance using predictive algorithms to recommend what book you should read next or to suggest purchases to you.  At the moment, personalisation is driven by data derived from the tracks you make in the digital world as you surf the internet, watch videos and make purchases.  However, in the future, those predictive algorithms could be based on reading your mind, or at least its digital twin.  We worry about loss of privacy at the moment, by which we probably mean the collation of vast amounts of data about our lives by unaccountable organisations, and it worries us because of the potential for our lives to be manipulated without our being aware it is happening.  Our free will is endangered by such manipulation but it might be lost entirely to a digital twin of our mind.  To quote the philosopher Michael Lynch, you would be handing over ‘privileged access to your mental states’ and to some extent you would no longer be a unique being.  We are a long way from possessing the technology to realise a digital twin of a human mind but the possibility is on the horizon.

Source: Richard Waters, They’re watching you, FT Weekend, 24/25 October 2020.

Image: Extract from abstract by Zahrah Resh.

Scattering electrons reveal dislocations in material structure

Figure 9 from Yang et al, 2012. Map of plastic strain around the crack tip (0, 0) based on the full width at half maximum of the discrete Fourier transforms of BSE images, together with thermoelastic stress analysis data (white line) and estimates of the plastic zone size based on the approaches of Dugdale (green line) and Irwin (blue line; dimensions in millimetres).

It is almost impossible to manufacture metal components that are flawless.  Every flaw or imperfection in a metallic component is a potential site for the initiation of a crack that could lead to the failure of the component [see ‘Alan Arnold Griffith’ on April 26th, 2017].  Hence, engineers are very interested in understanding the mechanisms of crack initiation and propagation so that these processes can be prevented or, at least, inhibited.  It is relatively easy to achieve these outcomes by not applying loads that would supply the energy to drive failure processes; however, the very purpose of a metal component is often to carry load and hence a compromise must be reached.  The deep understanding of crack initiation and propagation, required for an effective and safe compromise, needs detailed measurements of the evolution of the crack and of its advancing front or tip [depending on whether you are thinking in three or two dimensions].  When a metal is subjected to repeated cycles of loading, a crack can grow incrementally with each load cycle; and in these conditions a small volume of material just ahead of the crack, into which the crack is about to grow, has an important role in determining the rate of crack growth.  The sharp geometry of the crack tip causes localisation of the applied load in the material ahead of the crack, raising the stress sufficiently high to cause permanent deformation in the material on the macroscale.  The region of permanent deformation is known as the crack tip plastic zone.  The permanent deformation induces disruptions in the regular packing of the metal atoms or crystal lattice, which are known as dislocations; continued cyclic loading causes the dislocations to move and congregate around the crack tip.  Ultimately, dislocations combine to form voids in the material and then voids coalesce to form the next extension of the crack.  In reality, it is an oversimplification to refer to a crack tip because there is a continuous transition from a definite crack to definitely no crack, via a network of loosely connected voids, unconnected voids, aggregated dislocations almost forming a void, a progressively more dispersed crowd of dislocations and, finally, virgin or undamaged material.  If you know where to look on a polished metal surface then you could probably see a crack about 1 mm in length and, with the aid of an optical microscope, you could probably see the larger voids forming in the material ahead of the crack, especially when a load is applied to open the crack.  However, dislocations are very small, of the order of tens of nanometres in steel, and hence not visible in an optical microscope because they are smaller than the wavelength of light.  When dislocations congregate in the plastic zone ahead of the crack, they disturb the surface of the metal, causing a change in its texture which can be detected in the pattern produced by electrons bouncing off the surface.  At Michigan State University about ten years ago, using backscattered electron (BSE) images produced in a scanning electron microscope (SEM), we demonstrated that the change in texture could be measured and quantified by evaluating the frequency content of the images using a discrete Fourier transform (DFT).  We collected 225 square images arranged in a chessboard pattern covering a 2.8 mm by 2.8 mm square around a 5 mm long crack in a titanium specimen, which allowed us to map the plastic zone associated with the crack tip (figure 9 from Yang et al, 2012).
The length of the side of each image was 115 microns and 345 pixels, so that we had 3 pixels per micron, which was sufficient to resolve the texture changes in the metal surface due to dislocation density.  The images are from our paper published in the Proceedings of the Royal Society and the one below (figure 4 from Yang et al, 2012) shows four BSE images along the top at increasing distances from the crack tip, moving from left to right.  The middle row shows the corresponding results from the discrete Fourier transform, which illustrate the decreasing frequency content of the images moving from left to right, i.e. with distance from the crack.  The graphs in the bottom row show the profile through the centre of each DFT.  The grain structure in the metal can be seen in the BSE images and looks like crazy paving on a garden path or patio.  Each grain has a particular and continuous crystal lattice orientation, which causes the electrons to scatter from it differently than from its neighbours.  We have used the technique to verify measurements of the extent of the crack tip plastic zone made using thermoelastic stress analysis (TSA) and then used TSA to study ‘Crack tip plasticity in reactor steels’ [see post on March 13th, 2019].
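For readers who want to experiment with the idea, the sketch below shows, in Python, one plausible way to quantify surface texture from the frequency content of an image tile using the full width at half maximum (FWHM) of a line profile through the centre of its DFT.  It is a simplified illustration of the principle rather than the implementation used in Yang et al, 2012; the function names, the non-overlapping tiling and the omission of windowing and calibration steps are all my assumptions.

    import numpy as np

    def dft_fwhm(tile):
        # Remove the mean so the zero-frequency spike does not dominate.
        tile = tile - tile.mean()
        # Magnitude of the 2D DFT with zero frequency shifted to the centre.
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(tile)))
        # Horizontal line profile through the centre of the DFT (cf. figure 4c).
        profile = spectrum[spectrum.shape[0] // 2, :]
        # Rougher texture (higher dislocation density) spreads energy to higher
        # spatial frequencies, widening the peak; return its FWHM in frequency bins.
        above = np.flatnonzero(profile >= profile.max() / 2.0)
        return float(above[-1] - above[0])

    def map_plastic_zone(image, tile_px=345):
        # Divide a large BSE mosaic into square tiles, evaluating the FWHM metric
        # for each tile to build a coarse map analogous to figure 9.
        rows, cols = image.shape[0] // tile_px, image.shape[1] // tile_px
        zone = np.zeros((rows, cols))
        for i in range(rows):
            for j in range(cols):
                zone[i, j] = dft_fwhm(image[i * tile_px:(i + 1) * tile_px,
                                            j * tile_px:(j + 1) * tile_px])
        return zone

    # Example with synthetic data standing in for a 3 x 3 mosaic of BSE tiles.
    rng = np.random.default_rng(0)
    print(map_plastic_zone(rng.normal(size=(1035, 1035))))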

Figure 4 from Yang et al, 2012. (a) Backscattered electron images at increasing distance from crack from left to right; (b) their corresponding discrete Fourier transforms (DFTs) and (c) a horizontal line profile across the centre of each DFT.

Reference: Yang, Y., Crimp, M., Tomlinson, R.A., Patterson, E.A., 2012, Quantitative measurement of plastic strain field at a fatigue crack tip, Proc. R. Soc. A., 468(2144):2399-2415.