The concept of digital twins is gaining acceptance and our ability to generate them is advancing [see ‘Digital twins that thrive in the real world’ on June 9th, 2021]. It is conceivable that we will be able to simulate many real-world systems in the not-too-distant future. Perhaps not in my lifetime, but possibly within this century, we will be able to connect these simulations together to create a computer-generated world. This raises the possibility that other forms of life have already reached this stage of technological development and that we are living in one of their simulations. We cannot know for certain that we are not in a simulation, but equally we cannot know for certain that we are. If some other life form had reached the stage of being able to simulate the universe, then they might do it for entertainment, in which case we might exist inside the equivalent of a teenager’s smartphone; or they might do it for scientific exploration, in which case we might be inside one of thousands of simulations running simultaneously on a lab computer to gather statistical evidence on the development of universes. It seems probable that many more simulations would be performed for scientific research than for entertainment; so, if we are in a simulation, it is more likely that its creator is a scientist who is uninterested in the particular one in which we exist. Of course, an alternative scenario is that humans become extinct before reaching the stage of being able to simulate the world or the universe. If extinction occurs as a result of our inability to manage the technological advances that would allow us to simulate the world, then it seems less likely that other life forms would have avoided this fate, and so the probability that we are in a simulation should be reduced.
You could also question whether other life forms would have the same motivations or desires to create computer simulations of evolutionary history. There are many reasons to doubt that we are in a computer simulation, but it does not seem possible to be certain about it.
A couple of weeks ago, I wrote about unattainable certainty [see ‘Certainty is unattainable and near-certainty unaffordable’ on May 12th, 2021] and you might have thought that some things are certain, such as that tomorrow will follow today. However, even that is not certain – it has been estimated that there is a 1 in 300,000 chance of an asteroid impact on Earth in the next one hundred years resulting in more than one million fatalities. It might seem like a very small probability that you will not be around tomorrow due to an asteroid impact; however, as Sir David Spiegelhalter has pointed out, if that probability of fatalities were associated with an industrial installation, then it would be considered an intolerable risk by the UK Health and Safety Executive. By the way, if you want a more accurate estimate of the probability that an asteroid impact will prevent you from seeing tomorrow, then NASA provides information about the next Near Earth Object (NEO) to pass within 10 lunar distances (the distance between the Moon and the Earth, which is 384,000 km) at https://cneos.jpl.nasa.gov/. 121 NEOs came within one lunar distance during the last twelve months, of which the largest had a diameter of between 88 m and 200 m, about the size of an Olympic stadium, and came within 310,000 km; while the closest came within 8,000 km, less than the Earth’s diameter of 12,742 km, and was between 4.8 m and 11 m in diameter, or about the size of two double-decker buses. Spiegelhalter reassures us that there is no record of anyone, except a cow, being killed by an asteroid, whereas tragically the same cannot be said of double-decker buses!
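As a rough back-of-envelope check on how small that risk is day to day, we can spread the quoted 1-in-300,000-per-century figure uniformly over time (an assumption of mine, purely for illustration):

```python
# Back-of-envelope conversion of the quoted asteroid risk
# (1 in 300,000 over the next one hundred years) to a daily probability,
# assuming (for illustration only) the risk is uniform over time.

P_CENTURY = 1 / 300_000          # quoted probability of a >1M-fatality impact per century
DAYS_PER_CENTURY = 100 * 365.25  # approximate number of days in a century

p_per_day = P_CENTURY / DAYS_PER_CENTURY
print(f"Daily probability: about 1 in {1 / p_per_day:,.0f}")
```

On these assumptions the daily figure comes out at roughly one in eleven billion, which helps explain why we do not lose sleep over it even though the per-century risk would be intolerable for an industrial installation.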
Want to know how to assess the quality of predictions of structural deformation from a computational model, and how to diagnose the causes of differences between measurements and predictions? The MOTIVATE project has the answers; that might seem like an over-assertive claim, but read on and make your own judgment. Eighteen months ago, I reported on a new method for quantifying the uncertainty present in measurements of deformation made in an industrial environment [see ‘Industrial uncertainty’ on December 12th, 2018] that we were trialling on a 1 m square panel of an aircraft fuselage. Recently, we have used the measurement uncertainty we found to make judgments about the quality of predictions from computer models of the panel under compressive loading. The top graphic shows the outside surface of the panel (left) with a speckle pattern to allow measurements of its deformation using digital image correlation (DIC) [see ‘256 shades of grey’ on January 22nd, 2014 for a brief explanation of DIC]; and the inside surface (right) with stringers and ribs. The bottom graphic shows our results for two load cases: a 50 kN compression (top row) and a 50 kN compression combined with 1 degree of torsion (bottom row). The left column shows the out-of-plane deformation measured using a stereoscopic DIC system and the middle column shows the corresponding predictions from a computational model using finite element analysis [see ‘Did cubism inspire engineering analysis?’ on January 25th, 2017]. We have described these deformation fields in a reduced form using feature vectors by applying image decomposition [see ‘Recognizing strain’ on October 28th, 2015 for a brief explanation of image decomposition]. The elements of the feature vectors are known as shape descriptors, and corresponding pairs of them, from the measurements and predictions, are plotted in the graphs on the right in the bottom graphic for each load case.
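To give a flavour of how a deformation field is reduced to a feature vector, here is a minimal sketch of image decomposition using a two-dimensional Chebyshev polynomial fit. This is my own illustrative example, not the MOTIVATE implementation: the function name, the choice of basis degree, and the use of a plain least-squares fit are all assumptions for the sketch.

```python
import numpy as np

def shape_descriptors(field, degree=4):
    """Decompose a 2D deformation field into coefficients of 2D
    Chebyshev polynomials ('shape descriptors'). A simplified sketch
    of image decomposition, not the MOTIVATE project's code."""
    ny, nx = field.shape
    # Map the pixel grid onto the [-1, 1] x [-1, 1] domain of the basis
    x = np.linspace(-1.0, 1.0, nx)
    y = np.linspace(-1.0, 1.0, ny)
    X, Y = np.meshgrid(x, y)
    # Matrix of 2D Chebyshev basis terms evaluated at every grid point
    basis = np.polynomial.chebyshev.chebvander2d(
        X.ravel(), Y.ravel(), [degree, degree])
    # Least-squares fit: coefficients form the feature vector
    coeffs, *_ = np.linalg.lstsq(basis, field.ravel(), rcond=None)
    return coeffs  # (degree + 1)**2 shape descriptors
```

The point of the reduction is that a full-field map containing tens of thousands of data points is summarised by a few tens of descriptors, so corresponding pairs from measurement and prediction can be compared directly.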
If the predictions were in perfect agreement with the measurements, then all of the points on these graphs would lie on the line of equality [y = x], which is the solid line on each graph. However, perfect agreement is unobtainable because there will always be uncertainty present; so the question arises: how much deviation from the solid line is acceptable? One answer is that the deviation should be less than the uncertainty present in the measurements, which we evaluated with our new method and which is shown by the dashed lines. Hence, when all of the points fall inside the dashed lines, the predictions are at least as good as the measurements. If some points lie outside the dashed lines, then we can look at the form of the corresponding shape descriptors to start diagnosing why there are significant differences between our model and the experiment. The forms of these outlying shape descriptors are shown as insets on the plots. However, busy or non-technical decision-makers are often not interested in this level of detailed analysis and instead just want to know how good the predictions are. To answer this question, we have implemented a validation metric (VM) that we developed [see ‘Million to one’ on November 21st, 2018] which allows us to state the probability that the predictions and measurements are from the same population, given the known uncertainty in the measurements – these probabilities are shown in the black boxes superimposed on the graphs.
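The acceptance test described above can be sketched in a few lines of code. To be clear, this is a simplified stand-in of my own, not the published validation metric from the ‘Million to one’ post: it simply flags descriptor pairs whose deviation from the line of equality exceeds the measurement uncertainty, and the function name and example numbers are hypothetical.

```python
import numpy as np

def compare_descriptors(measured, predicted, u_meas):
    """Flag shape-descriptor pairs whose deviation from the line of
    equality [y = x] exceeds the measurement uncertainty u_meas.
    A simplified stand-in, not the MOTIVATE validation metric."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    deviation = np.abs(predicted - measured)   # distance from y = x
    outliers = np.flatnonzero(deviation > u_meas)
    fraction_ok = 1.0 - outliers.size / measured.size
    return fraction_ok, outliers

# Hypothetical descriptor pairs and uncertainty (illustrative numbers only)
meas = [0.9, 1.8, -0.4, 2.6, 0.1]
pred = [1.0, 1.7, -0.3, 3.4, 0.1]
ok, out = compare_descriptors(meas, pred, u_meas=0.3)
```

In this toy case four of the five pairs fall within the uncertainty band and the fourth descriptor is flagged for closer inspection, mirroring the diagnosis step described above; the real metric goes further and expresses the result as a probability that predictions and measurements belong to the same population.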
These novel methods create a toolbox for alleviating uncertainty about predictions of structural behaviour in industrial contexts. Please get in touch if you would like more information or want to test these tools yourself.
The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660 and the Swiss State Secretariat for Education, Research and Innovation under contract number 17.00064.
The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.