Tag Archives: simulation

Getting smarter

A350 XWB passes Maximum Wing Bending test [from: http://www.airbus.com/galleries/photo-gallery]

Garbage in, garbage out (GIGO) is a perennial problem in computational simulations of engineering structures.  If the description of the geometry of the structure, the material behaviour, the loading conditions or the boundary conditions is incorrect (garbage in), then the simulation generates predictions that are wrong (garbage out), or at least an unreliable representation of reality.  It is not easy to describe precisely the geometry, material, loading and environment of a complex structure, such as an aircraft or a power station, because the complete description is either unavailable or too complicated.  Hence, modellers make assumptions about the unknown information and/or to simplify the description.  This means that the predictions from the simulation have to be tested against reality in order to establish confidence in them – a process known as model validation [see my post entitled ‘Model validation’ on September 18th, 2012].
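The comparison at the heart of model validation can be sketched very simply: put predicted and measured quantities side by side and compute an agreed error metric against a pre-agreed tolerance.  The sketch below is purely illustrative — the strain values, the mean-relative-error metric and the 5% threshold are my hypothetical choices, not a prescribed procedure from the standards cited in the bibliography.

```python
import numpy as np

# Hypothetical measured strains from an experiment and the corresponding
# predictions from a simulation at the same locations (microstrain).
measured = np.array([510.0, 620.0, 705.0, 820.0, 915.0])
predicted = np.array([500.0, 610.0, 720.0, 800.0, 940.0])

# A simple validation metric: the mean absolute relative error
# between prediction and measurement.
relative_error = np.abs(predicted - measured) / np.abs(measured)
mean_error = relative_error.mean()

# Accept the model only if the error is within a tolerance chosen
# before the test (here, 5% as an illustrative threshold).
tolerance = 0.05
print(f"mean relative error: {mean_error:.3f}")
print("model validated" if mean_error <= tolerance else "model not validated")
```

In practice the metric, the measured quantity and the acceptance threshold are all negotiated between modellers and decision-makers before the experiment, which is part of what makes designing validation experiments hard.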

It is good practice to design experiments specifically to generate data for model validation but it is expensive, especially when your structure is a huge passenger aircraft.  So naturally, you would like to extract as much information from each experiment as possible and to perform as few experiments as possible, whilst both ensuring predictions are reliable and providing confidence in them.  In other words, you have to be very smart about designing and conducting the experiments as well as performing the validation process.

Together with researchers at Empa in Zurich, the Industrial Systems Institute of the Athena Research Centre in Athens and Dantec Dynamics in Ulm, I am embarking on a new EU Horizon 2020 project to try to make us smarter about experiments and validation.  The project, known as MOTIVATE [Matrix Optimization for Testing by Interaction of Virtual and Test Environments (Grant Nr. 754660)], is funded through the Clean Sky 2 Joint Undertaking with Airbus acting as our topic manager to guide us towards an outcome that will be applicable in industry.  We held our kick-off meeting in Liverpool last week, which is why it is uppermost in my mind at the moment.  We have 36 months to get smarter on an industrial scale and demonstrate it in a full-scale test on an aircraft structure.  So, some sleepless nights ahead…

Bibliography:

ASME V&V 10-2006, Guide for verification & validation in computational solid mechanics, American Society of Mech. Engineers, New York, 2006.

European Committee for Standardisation (CEN), Validation of computational solid mechanics models, CEN Workshop Agreement, CWA 16799:2014 E.

Hack E & Lampeas G (Guest Editors) & Patterson EA (Editor), Special issue on advances in validation of computational mechanics models, J. Strain Analysis, 51 (1), 2016.

http://www.engineeringvalidation.org/

Is the world incomprehensible?

For hundreds of years, philosophers and scientists have encouraged one another to keep their explanations of the natural world as simple as possible.  Ockham’s razor, attributed to the 14th century Franciscan friar, William of Ockham, is a well-established and much-cited philosophical principle that, of two possible explanations, the simpler one is more likely to be correct.  More recently, Albert Einstein is supposed to have said: ‘everything should be made as simple as possible, but not simpler’.  I don’t think that William of Ockham and Albert Einstein were arguing that we should keep everything simple; but rather that we should not make scientific explanations more complicated than necessary.  However, do we have a strong preference for focusing on phenomena whose behaviour is sufficiently uncomplex that it can be explained by relatively simple theories and models?  In other words, to quote William Wimsatt, ‘we tend to ignore phenomena whose complexity exceeds the capability of our detection apparatus and explanatory models’.

Most of us find science hard; perhaps this is not just about the language used by the cognoscenti to describe it [see my post on ‘Why is thermodynamics so hard?’ on February 11th, 2015], but more about the complexity of the world around us.  To think about this level of complexity requires us to assemble and synchronize very large collections of neurons (100 million or more) in our brains, which is the very opposite of the repetitive formation of relatively small assemblies of neurons that Susan Greenfield has argued are associated with activities we find pleasurable [see my post entitled ‘Digital hive mind’ on November 30th, 2016].  This might imply that thinking about complexity is not pleasurable for most of us, or at least requires very significant effort, and that this explains the aesthetic appeal of simplicity.
However, as William Wimsatt has pointed out, ‘simplicity is not reflective of a metaphysical principle of nature’ but a constraint applied by us; and which, if we persist in its application, will render the world incomprehensible to us.

Sources:

William C. Wimsatt, Randomness and perceived randomness in evolutionary biology, Synthese, 43(2):287-329, 1980.

Susan Greenfield, A day in the life of the brain: the neuroscience of consciousness from dawn to dusk, Allen Lane, 2016.

More violent storms

I made a mistake last week by initially publishing two posts.  My apologies for confusing you or tantalising you with the prospect of going bungee jumping and then postponing the trip.  We’ll go bungee jumping next week.  I postponed it because it’s a preview of the new MOOC on ‘Understanding Super Structures’ that I am writing and there was a delay in publishing the registration page for the MOOC.

When I posted my comment about postponing the bungee jump due to rain, I didn’t realize that, the following day, Liverpool would be battered by Storm Doris, with 90 mile-per-hour winds that closed the Port of Liverpool.  As I sat writing week 4 of the new MOOC, the wind was swirling around our house causing the windows to rattle; and, on the top storey of our narrow but tall house, you could feel the house moving in the gusts of wind.  Across the street, people visiting Liverpool Cathedral were hanging onto the railings as they made their way to the entrance, and the trees were being bent over to an angle that made you think there would be a loud cracking and splintering of wood at any moment.  Fortunately, the storm was short-lived in Liverpool and moved on to wreak havoc inland.  Bungee jumping would have been very hazardous!

The number of violent storms appears to be increasing and the graphic shows the number of storms in the Atlantic basin since 1850.  Although there is a lot of scatter in the data, there is a clear concentration in the last couple of decades of years with fifteen or more named storms, which suggests there has been more energy in the weather systems in recent years.  The primary source of this energy is the temperature of the oceans and atmosphere.  There is a good account of the development of storm cells in Manuel DeLanda’s book ‘Philosophy and Simulation: The Emergence of Synthetic Reason’, see chapter 1 – The Storm in the Computer, which is available via Google Preview.
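The kind of screening behind that observation — flagging seasons with fifteen or more named storms — is trivial to sketch.  The year/storm-count pairs below are invented purely for illustration; they are not the real named-storm record shown in the graphic.

```python
# Illustrative (invented) season records: year -> number of named storms.
seasons = {
    1960: 7, 1970: 10, 1983: 4, 1991: 8,
    1995: 19, 2000: 15, 2005: 28, 2010: 19,
}

# Flag the very active seasons: fifteen or more named storms.
very_active = sorted(year for year, n in seasons.items() if n >= 15)
print(very_active)  # with this invented data: [1995, 2000, 2005, 2010]
```

With the real record, the interesting question is whether such active seasons cluster in recent decades more than scatter alone would suggest.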

The increased frequency of high-energy storm systems is a very apparent manifestation of climate change that is having an impact on many people.  Yet, some governments refuse to even consider the possibility that our climate is changing and that they need to lead our society in discussing and planning strategies to mitigate the impacts.  It reminds me of the saying, attributed to Henri Poincaré: ‘To doubt everything, or, to believe everything, are two equally convenient solutions; both dispense with the necessity of reflection.’

Can you trust your digital twin?

Author’s digital twin?

There is about a 3% probability that you have a twin: about 32 in 1000 people are one of a pair of twins.  At the moment an even smaller number of us have a digital twin, but this is the direction in which computational biomedicine is moving, along with other fields.  For instance, soon all aircraft, and most new nuclear power plants, will have digital twins.  Digital twins are computational representations of individual members of a population, or fleet, in the case of aircraft and power plants.

For an engineering system, its computer-aided design (CAD) is the beginning of its twin, to which information is added from the quality assurance inspections before it leaves the factory and from non-destructive inspections during routine maintenance, as well as data acquired during service operations from health monitoring.  The result is an integrated model and database, which describes the condition and history of the system from conception to the present, that can be used to predict its response to anticipated changes in its environment, its remaining useful life, or the impact of proposed modifications to its form and function.  It is more challenging to create digital twins of ourselves, because we don’t have original design drawings or direct access to the onboard health monitoring system, but this is being worked on.

However, digital twins are only useful if people believe in the behaviour or performance that they predict and are prepared to make decisions based on the predictions; in other words, if the digital twins possess credibility.  Credibility appears to be like beauty because it is in the eye of the beholder.  Most modellers believe that their models are both beautiful and credible – after all, they are their ‘babies’ – but unfortunately modellers are not usually the decision-makers, who often have a different frame of reference and set of values.
In my group, one current line of research is to provide metrics and language that will assist in conveying confidence in the reliability of a digital twin to non-expert decision-makers, and another is to create methodologies for evaluating the evidence prior to making a decision.  The approach is different depending on the extent to which the underlying models are principled, i.e. based on the laws of science, and can be tested using observations from the real world.  In practice, even with principled, testable models, a digital twin will never be an identical twin and hence there will always be some uncertainty, so decisions remain a matter of judgement based on a sound understanding of the best available evidence – so you are always likely to need advice from a friendly engineer   🙂
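The life-cycle described above — a design baseline that accumulates inspection and monitoring data and supports predictions — can be caricatured in a few lines.  This is a minimal sketch of the idea only: the component name, design life, hours and the crude remaining-life estimate are all hypothetical, and a real digital twin would couple physics-based models to this data rather than simple bookkeeping.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Toy record of one component: design baseline plus service history."""
    component_id: str
    design_life_hours: float                   # from the original design
    operating_hours: float = 0.0               # updated from health monitoring
    inspections: list = field(default_factory=list)

    def record_inspection(self, note: str, hours: float) -> None:
        # Append a maintenance/NDE finding and update accumulated usage.
        self.operating_hours = hours
        self.inspections.append((hours, note))

    def remaining_life_hours(self) -> float:
        # Crude illustrative estimate: design life minus accumulated usage.
        return max(self.design_life_hours - self.operating_hours, 0.0)

# Hypothetical usage: a wing spar with a 60,000-hour design life,
# inspected after 12,000 hours of service.
twin = DigitalTwin("wing-spar-001", design_life_hours=60000.0)
twin.record_inspection("no cracks found at routine NDE", hours=12000.0)
print(twin.remaining_life_hours())  # 48000.0
```

The credibility question in the text is precisely about how much weight a decision-maker should place on a number like that remaining-life estimate, given the quality of the models and data behind it.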

Sources:

De Lange, C., 2014, Meet your unborn child – before it’s conceived, New Scientist, 12 April 2014, p.8.

Glaessgen, E.H., & Stargel, D.S., 2012, The digital twin paradigm for future NASA and US Air Force vehicles, Proc 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, AIAA paper 2012-2018, NF1676L-13293.

Patterson E.A., Feligiotti, M. & Hack, E., 2013, On the integration of validation, quality assurance and non-destructive evaluation, J. Strain Analysis, 48(1):48-59.

Patterson, E.A., Taylor, R.J. & Bankhead, M., 2016, A framework for an integrated nuclear digital environment, Progress in Nuclear Energy, 87:97-103.

Patterson EA & Whelan MP, 2016, A framework to establish credibility of computational models in biology, Progress in Biophysics & Molecular Biology, doi: 10.1016/j.pbiomolbio.2016.08.007.

Tuegel, E.J., 2012, The airframe digital twin: some challenges to realization, Proc 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference.