Digital twins are becoming ubiquitous in many areas of engineering [see ‘Can you trust your digital twin?‘ on November 23rd, 2016]. At the same time, the terminology is becoming blurred as digital shadows and digital models are treated as if they were synonymous with digital twins. A digital model is a digitised replica of a physical entity that lacks any automatic data exchange between the entity and its replica. A digital shadow is a digital representation of a physical object with a one-way flow of information from the object to its representation. A digital twin, by contrast, is a functional representation with a live feedback loop to its counterpart in the real world. The feedback loop consists of continuous updates to the digital twin about the condition and performance of the physical entity, based on data from sensors, together with analysis from the digital twin about the performance of the physical entity. This enables a digital twin to provide a service to many stakeholders. For example, the users of a digital twin of an aircraft engine could include the manufacturer, the operator, the maintenance providers and the insurers. These capabilities imply that digital twins are themselves becoming products, existing in a digital context that might connect many digital products and thus form an integrated digital environment. I wrote about integrated digital environments when they were a concept and the primary challenges were technical in nature [see ‘Enabling or disruptive technology for nuclear engineering?‘ on January 28th, 2015]. Many of these technical challenges have been resolved and the next set of challenges are economic and commercial, associated with launching digital twins into global markets that lack adequate understanding, legislation, security, regulation or governance for digital products.
In collaboration with my colleagues at the Virtual Engineering Centre, we have recently published a white paper, entitled ‘Transforming digital twins into digital products that thrive in the real world‘, that reviews these issues and identifies the need to establish digital contexts that embrace the social, economic and technical requirements for the appropriate use of digital twins [see ‘Digital twins could put at risk what it means to be human‘ on November 18th, 2020].
Cars that run on air might seem like a fairy tale or an April Fools’ story, but it is possible to use air as a medium for storing energy by compressing it or liquefying it at -196°C. The MDI company in Luxembourg has been developing and building a compressed air engine that powers a small car, the Airpod 2.0, and a new industrial vehicle, the Air‘Volution. When the compressed air is allowed to expand, the energy stored in it is released and can be used to power the vehicle. The Airpod 2.0 weighs only 350 kg, has seats for two people, 400 litres of luggage space and an urban-cycle range of 100 to 120 km at a top speed of 80 km/h. So, it is an urban runabout with zero emissions and no requirement for lithium, nickel or cobalt for batteries, but a limited range. A couple of years ago I tasked an MSc student with a project to consider the practicalities of a car running on liquid air, based on the premise that it should be possible to store a higher density of energy in liquefied air (about 290 kJ/litre) than in compressed air (about 100 kJ/litre). His concept design used a rolling piston engine to power a family car capable of carrying five passengers and 346 litres of luggage over a range of 160 km. So, his design carried a bigger payload further than the Airpod 2.0; however, like the electric charging system described a few weeks ago [see ‘Innovative design too far ahead of the market’ on May 5th, 2021], the design never left the drawing board.
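The energy-density figures above suggest how the two storage media compare on board a vehicle. As a minimal sketch, assuming an illustrative 200-litre tank (my own round number, not a figure from either design), the arithmetic works out as follows:

```python
# Rough comparison of on-board energy for the two storage media,
# using the approximate energy densities quoted in the text.
COMPRESSED_AIR_KJ_PER_LITRE = 100  # compressed air
LIQUID_AIR_KJ_PER_LITRE = 290      # air liquefied at -196 degrees C

tank_litres = 200  # assumed tank size, purely for illustration

energy_compressed = COMPRESSED_AIR_KJ_PER_LITRE * tank_litres  # kJ stored
energy_liquid = LIQUID_AIR_KJ_PER_LITRE * tank_litres          # kJ stored

# All else being equal, range scales with stored energy, so the
# ratio of energy densities is also the ratio of achievable ranges.
ratio = LIQUID_AIR_KJ_PER_LITRE / COMPRESSED_AIR_KJ_PER_LITRE

print(f"Compressed air: {energy_compressed} kJ")   # 20000 kJ
print(f"Liquid air:     {energy_liquid} kJ")       # 58000 kJ
print(f"Liquid air stores {ratio:.1f}x more energy per litre")
```

On these numbers, a liquid-air tank stores nearly three times the energy of a compressed-air tank of the same volume, which is consistent with the bigger payload and longer range of the student’s concept design.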
Regulators are charged with ensuring that manufactured products, from aircraft and nuclear power stations to cosmetics and vaccines, are safe. The general public seeks certainty that these devices, and the materials and chemicals they are made from, will not harm them or the environment. Technologists who design and manufacture these products know that absolute certainty is unattainable and near-certainty is unaffordable. Hence, they attempt to deliver the service or product that society desires while ensuring that the risks are As Low As Reasonably Practicable (ALARP). The role of regulators is to independently assess the risks, make a judgment on their acceptability and thus decide whether the operation of a power station or the distribution of a vaccine can go ahead. These are difficult decisions with huge potential consequences – just think of the more than three hundred people killed in the two crashes of Boeing 737 Max airplanes, or the 10,000 or so people affected by birth defects caused by the drug thalidomide. Evidence presented to support applications for regulatory approval is largely based on physical tests, for example fatigue tests on an aircraft structure or toxicological tests using animals. In some cases the physical tests might not be entirely representative of the real-life situation, which can make it difficult to make decisions using the data; for instance, a ground test on an airplane is not the same as a flight test, and in many respects the animals used in toxicity testing are physiologically different to humans. In addition, physical tests are expensive and time-consuming, which both drives up the cost of seeking regulatory approval and slows down the translation of innovative new products to the market. The almost ubiquitous use of computer-based simulations to support the research, development and design of manufactured products inevitably leads to their use in supporting regulatory applications.
This creates challenges for regulators who must judge the trustworthiness of predictions from these simulations [see ‘Fake facts & untrustworthy predictions‘ on December 4th, 2019]. It is standard practice for modellers to demonstrate the validity of their models; however, validation does not automatically lead to acceptance of predictions by decision-makers. Acceptance is more closely related to scientific credibility. I have been working across a number of disciplines on the scientific credibility of models, including in engineering where multi-physics phenomena are important, such as hypersonic flight and fusion energy [see ‘Thought leadership in fusion energy‘ on October 9th, 2019], and in computational biology and toxicology [see ‘Hierarchical modelling in engineering and biology‘ on March 14th, 2018]. Working together with my collaborators in these disciplines, we have developed a common set of factors that underpin scientific credibility, based on principles drawn from the literature on the philosophy of science and designed to be both discipline-independent and method-agnostic [Patterson & Whelan, 2019; Patterson et al, 2021]. We hope that our cross-disciplinary approach will break down the subject-silos that have become established as different scientific communities have developed their own frameworks for validating models. As mentioned above, the process of validation tends to be undertaken by model developers and, in some sense, belongs to them; whereas credibility is not exclusive to the developer but is a trust that needs to be shared with a decision-maker who seeks to use the predictions to inform their decision [see ‘Credibility is in the eye of the beholder‘ on April 20th, 2016]. Trust requires a common knowledge base and understanding that is usually built through interactions.
We hope the credibility factors will provide a framework for these interactions as well as a structure for building a portfolio of evidence that demonstrates the reliability of a model.
Patterson EA & Whelan MP, On the validation of variable fidelity multi-physics simulations, J. Sound & Vibration, 448:247-258, 2019.
Patterson EA, Whelan MP & Worth A, The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application, Computational Toxicology, 17: 100144, 2021.
Image: Extract from abstract by Zahrah Resh.
We held the kick-off meeting for a new research project this week. It’s a three-way collaboration involving three professors based in Portugal, the UK and USA [Chris Sutcliffe, John Lambros at UIUC and me]; so, our kick-off meeting should have involved at least two of us travelling to the laboratory of the third collaborator and spending some time brainstorming about the challenges that we have agreed to tackle over the next three years. Instead we had a call via Skype and a rather procedural meeting in which we covered all of the issues without really engendering any excitement or sparking any new ideas. It would appear that we need the stimulus of new environments to maximise our creativity and that we use body language as well as facial expressions to help us reach a friendly consensus on which crazy ideas are worth pursuing and which should be quietly forgotten.
Our new research project has a long title: ‘Thermoacoustic response of Additively Manufactured metals: A multi-scale study from grain to component scales‘. In simple terms, we are going to look at whether residual stresses could be designed to be beneficial to the performance of structural parts used in demanding environments such as those found in reusable spacecraft, hypersonic flight vehicles and breeder blankets in fusion reactors. Residual stresses are often induced during the manufacture of parts and are usually detrimental to the performance of the part. Our hypothesis is that in additive manufacturing, or 3D printing, we have sufficient control of the manufacture of the part that we can introduce ‘designer stresses’ which will improve the part’s performance in demanding environments. The research is funded jointly by the National Science Foundation (NSF) in the USA and the Engineering and Physical Sciences Research Council (EPSRC) in the UK and is supported by The MTC and Renishaw plc; you can find out more at Grants on the Web. The research will be building on our recent research on ‘Potential dynamic buckling in hypersonic vehicle skin‘ [posted July 1st, 2020] and earlier work [see ‘Hot stuff‘ on September 13th, 2012]. While the demanding environment is not new to us, we will be using 3D printed parts for the first time instead of components made by conventional (subtractive) machining and taking them to higher temperatures.
The thumbnail shows measured modal shapes for a subtractively-manufactured plate subjected to three temperature regimes: room temperature (left), transverse heating of the centre of the plate (middle) and longitudinal heating on one edge (right), from Silva, A.S., Sebastian, C.M., Lambros, J. and Patterson, E.A., 2019. High temperature modal analysis of a non-uniformly heated rectangular plate: Experiments and simulations. J. Sound & Vibration, 443, pp.397-410.