Digital twins are becoming ubiquitous in many areas of engineering [see ‘Can you trust your digital twin?‘ on November 23rd, 2016]. At the same time, however, the terminology is becoming blurred as digital shadows and digital models are treated as if they were synonymous with digital twins. A digital model is a digitised replica of a physical entity that lacks any automatic data exchange between the entity and its replica. A digital shadow is a digital representation of a physical object with a one-way flow of information from the object to its representation. A digital twin, by contrast, is a functional representation with a live feedback loop to its counterpart in the real world. In this loop, data from sensors continuously updates the digital twin about the condition and performance of the physical entity, while analysis from the digital twin informs assessments of how the physical entity is performing. This enables a digital twin to provide a service to many stakeholders. For example, the users of a digital twin of an aircraft engine could include the manufacturer, the operator, the maintenance providers and the insurers. These capabilities imply that digital twins are themselves becoming products which exist in a digital context that might connect many digital products, thus forming an integrated digital environment. I wrote about integrated digital environments when they were a concept and the primary challenges were technical in nature [see ‘Enabling or disruptive technology for nuclear engineering?‘ on January 28th, 2015]. Many of those technical challenges have been resolved, and the next set of challenges are economic and commercial ones associated with launching digital twins into global markets that lack adequate understanding, legislation, security, regulation or governance for digital products.
In collaboration with my colleagues at the Virtual Engineering Centre, we have recently published a white paper entitled ‘Transforming digital twins into digital products that thrive in the real world‘, which reviews these issues and identifies the need to establish digital contexts that embrace the social, economic and technical requirements for the appropriate use of digital twins [see ‘Digital twins could put at risk what it means to be human‘ on November 18th, 2020].
Cars that run on air might seem like a fairy tale or an April Fools’ story, but it is possible to use air as a medium for storing energy by compressing it, or by liquefying it at -196°C. The MDI company in Luxembourg has been developing and building a compressed air engine which powers a small car, the Airpod 2.0, and a new industrial vehicle, the Air‘Volution. When the compressed air is allowed to expand, the energy stored in it is released and can be used to power the vehicle. The Airpod 2.0 weighs only 350 kg, has seats for two people, 400 litres of luggage space and an urban cycle range of 100 to 120 km at a top speed of 80 km/h. So, it is an urban runabout with zero emissions and no requirement for lithium, nickel or cobalt for batteries, but with a limited range. A couple of years ago I tasked an MSc student with a project to consider the practicalities of a car running on liquid air, based on the premise that it should be possible to store a higher density of energy in liquefied air (about 290 kJ/litre) than in compressed air (about 100 kJ/litre). His concept design used a rolling piston engine to power a family car capable of carrying five passengers and 346 litres of luggage over a range of 160 km. So, his design carried a bigger payload further than the Airpod 2.0; however, like the electric charging system described a few weeks ago [see ‘Innovative design too far ahead of the market’ on May 5th, 2021], the design never left the drawing board.
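The premise behind the liquid-air design can be checked with a back-of-envelope calculation using the energy densities quoted above. A minimal sketch, in which the 100-litre tank volume is a hypothetical figure chosen purely for illustration:

```python
# Energy densities quoted in the text above.
LIQUID_AIR_KJ_PER_L = 290      # approximate energy stored per litre of liquid air
COMPRESSED_AIR_KJ_PER_L = 100  # approximate energy stored per litre of compressed air

def stored_energy_mj(energy_density_kj_per_l, tank_volume_l):
    """Energy stored in a tank of the given volume, converted from kJ to MJ."""
    return energy_density_kj_per_l * tank_volume_l / 1000.0

tank_volume_l = 100  # hypothetical on-board tank volume, for illustration only
liquid = stored_energy_mj(LIQUID_AIR_KJ_PER_L, tank_volume_l)
compressed = stored_energy_mj(COMPRESSED_AIR_KJ_PER_L, tank_volume_l)

print(f"Liquid air:     {liquid:.1f} MJ")          # 29.0 MJ
print(f"Compressed air: {compressed:.1f} MJ")      # 10.0 MJ
print(f"Ratio:          {liquid / compressed:.1f}x")  # 2.9x
```

For the same tank volume, liquid air stores almost three times as much energy, which is consistent with the concept design carrying a bigger payload further than the Airpod 2.0.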
While pandemic lockdowns and travel bans are having a severe impact on spontaneity and creativity in research [see ‘Lacking creativity‘ on October 28th, 2020], they have induced a high level of ingenuity in achieving the final objective of the DIMES project, which is to conduct prototype demonstrations and evaluation tests of the DIMES integrated measurement system. We have gone beyond the project brief by developing a remote installation system that allows local engineers at a test site to successfully set up and run our measurement system. This has saved thousands of airmiles and several tonnes of CO2 emissions, as well as hours waiting in airport terminals and sitting in planes. These savings were made by members of our project team working remotely from their bases in Chesterfield, Liverpool, Ulm and Zurich instead of flying to the test site in Toulouse to perform the installation in a section of a fuselage, and then visiting a second time to conduct the evaluation tests. For this first remote installation, we were fortunate to have our collaborator from Airbus available to support us [see ‘Most valued player performs remote installation‘ on December 2nd, 2020]. We are about to stretch our capabilities further by conducting a remote installation and evaluation test during a full-scale aircraft test at the Aerospace Research Centre of the National Research Council Canada in Ottawa, with a team who had never seen the DIMES system and knew nothing about it until about a month ago. I could claim that this remote installation and test will save another couple of tonnes of CO2; but, in practice, we would probably not be performing a demonstration in Canada if we had not developed the remote installation capability.
The DIMES project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 820951. The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.
One of the exciting aspects of leading a university research group is that you can never be quite sure where the research is going next. We published a nice example of this unpredictability last week in Royal Society Open Science in a paper called ‘Transformation of measurement uncertainties into low-dimensional feature vector space‘ [1]. While the title is an accurate description of the contents, it does not give much away and certainly does not reveal that we proposed a new method for assessing the occurrence of El Niño events. For some time we have been working with massive datasets of measurements from arrays of sensors and representing them by fitting polynomials in a process known as image decomposition [see ‘Recognising strain‘ on October 28th, 2015]. The relatively small number of coefficients from these polynomials can be collated into a feature vector which facilitates comparison with other datasets [see, for example, ‘Out of the valley of death into a hype cycle‘ on February 24th, 2021]. Our recent paper provides a solution to the issue of representing the measurement uncertainty in the same space as the feature vector, which is roughly what we set out to do. We demonstrated our new method for representing the measurement uncertainty by calibrating and validating a computational model of a simple beam in bending, using data from an earlier study in an EU-funded project called VANESSA [2] — so no surprises there. However, my co-author and PhD student, Antonis Alexiadis, then went looking for other interesting datasets with which to demonstrate the new method. He found a set of spatially-varying uncertainties associated with a metamodel of soil moisture in a river basin in China [3], and global oceanographic temperature fields collected monthly over 11 years from 2002 to 2012 [4]. We used the latter set of data to develop a new technique for assessing the occurrence of El Niño events in the Pacific Ocean.
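The principle of image decomposition described above can be illustrated with a minimal sketch: fit a low-order polynomial to a dense set of measurements and keep only the coefficients as a compact feature vector. This simplified example uses a one-dimensional Chebyshev fit and synthetic data; the work described in the paper uses two-dimensional measurement fields and more sophisticated basis functions.

```python
import numpy as np

# Synthetic stand-in for a dense measurement profile from an array of sensors.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 1000)                        # sensor positions
signal = 0.5 - 0.8 * x**2 + 0.3 * x**3                  # underlying field
measurements = signal + rng.normal(0.0, 0.01, x.size)   # add measurement noise

# Fit a degree-5 Chebyshev polynomial; the coefficients become the feature vector.
degree = 5
feature_vector = np.polynomial.chebyshev.chebfit(x, measurements, degree)

# 1000 measurements are now represented by just 6 coefficients, and two
# datasets can be compared in this low-dimensional space, e.g. via the
# Euclidean distance between their feature vectors.
print(feature_vector.shape)  # (6,)

# Check that the compact representation still reproduces the data well.
reconstruction = np.polynomial.chebyshev.chebval(x, feature_vector)
rms_residual = np.sqrt(np.mean((reconstruction - measurements) ** 2))
print(f"RMS residual: {rms_residual:.4f}")
```

The contribution of the paper is to carry the measurement uncertainty through this transformation, so that uncertainties can be expressed in the same low-dimensional coefficient space as the feature vector itself.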
Our technique is based on global ocean dynamics rather than on the small region of the Pacific Ocean that is usually used, and it has the added advantages of providing a confidence level on the assessment as well as enabling straightforward comparisons of predictions and measurements. The comparison of predictions and measurements is a recurring theme in our current research, but I did not expect it to lead into ocean dynamics.
The image shows Figure 11 from [1]: convex hulls fitted to the clouds of points representing the uncertainty intervals for the ocean temperature measurements for each month in 2002, using only the three most significant principal components. The lack of overlap between hulls can be interpreted as implying a significant difference in temperature between months.
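The overlap test behind this interpretation can be sketched in a few lines. This is not the method from the paper, just an illustration of the underlying geometric idea: two convex hulls intersect if and only if some convex combination of one point cloud equals some convex combination of the other, which is a linear-programming feasibility problem. The two "months" below are hypothetical point clouds in a three-component principal-component space, with illustrative numbers only.

```python
import numpy as np
from scipy.optimize import linprog

def hulls_intersect(points_a, points_b):
    """Test whether the convex hulls of two point clouds overlap.

    The hulls intersect iff there exist convex weights w_a, w_b with
    points_a.T @ w_a == points_b.T @ w_b, posed here as an LP feasibility
    problem with a zero objective.
    """
    m, d = points_a.shape
    n, _ = points_b.shape
    # Equality constraints: points_a.T @ w_a - points_b.T @ w_b = 0 (d rows),
    # plus sum(w_a) = 1 and sum(w_b) = 1.
    a_eq = np.zeros((d + 2, m + n))
    a_eq[:d, :m] = points_a.T
    a_eq[:d, m:] = -points_b.T
    a_eq[d, :m] = 1.0
    a_eq[d + 1, m:] = 1.0
    b_eq = np.concatenate([np.zeros(d), [1.0, 1.0]])
    res = linprog(c=np.zeros(m + n), A_eq=a_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (m + n))
    return res.success

# Two hypothetical months as clouds of uncertainty-interval points in the
# space of the three most significant principal components.
rng = np.random.default_rng(1)
month_a = rng.normal([0.0, 0.0, 0.0], 0.5, size=(50, 3))
month_b = rng.normal([5.0, 5.0, 5.0], 0.5, size=(50, 3))

print(hulls_intersect(month_a, month_a))  # True: a hull overlaps itself
print(hulls_intersect(month_a, month_b))  # False: well-separated clouds
```

Non-overlapping hulls, as in the second case, would be read as a significant difference between the two months.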
[1] Alexiadis A, Ferson S, Patterson EA. 2021. Transformation of measurement uncertainties into low-dimensional feature vector space. Royal Society Open Science, 8(3): 201086.
[2] Lampeas G, Pasialis V, Lin X, Patterson EA. 2015. On the validation of solid mechanics models using optical measurements and data decomposition. Simulation Modelling Practice and Theory, 52: 92-107.
[3] Kang J, Jin R, Li X, Zhang Y. 2017. Block Kriging with measurement errors: a case study of the spatial prediction of soil moisture in the middle reaches of Heihe River Basin. IEEE Geoscience and Remote Sensing Letters, 14: 87-91.
[4] Gaillard F, Reynaud T, Thierry V, Kolodziejczyk N, von Schuckmann K. 2016. In situ-based reanalysis of the global ocean temperature and salinity with ISAS: variability of the heat content and steric height. Journal of Climate, 29: 1305-1323.