Category Archives: FACTS

Alleviating industrial uncertainty

Want to know how to assess the quality of predictions of structural deformation from a computational model, and how to diagnose the causes of differences between measurements and predictions?  The MOTIVATE project has the answers; that might seem like an over-assertive claim, but read on and make your own judgment.  Eighteen months ago, I reported on a new method for quantifying the uncertainty present in measurements of deformation made in an industrial environment [see ‘Industrial uncertainty’ on December 12th, 2018] that we were trialling on a 1 m square panel of an aircraft fuselage.  Recently, we have used the measurement uncertainty we found to make judgments about the quality of predictions from computer models of the panel under compressive loading.

The top graphic shows the outside surface of the panel (left) with a speckle pattern to allow measurements of its deformation using digital image correlation (DIC) [see ‘256 shades of grey‘ on January 22nd, 2014 for a brief explanation of DIC]; and the inside surface (right) with stringers and ribs.  The bottom graphic shows our results for two load cases: a 50 kN compression (top row) and a 50 kN compression combined with 1 degree of torsion (bottom row).  The left column shows the out-of-plane deformation measured using a stereoscopic DIC system and the middle column shows the corresponding predictions from a computational model using finite element analysis [see ‘Did cubism inspire engineering analysis?’ on January 25th, 2017].  We have described these deformation fields in a reduced form, as feature vectors, by applying image decomposition [see ‘Recognizing strain’ on October 28th, 2015 for a brief explanation of image decomposition].  The elements of the feature vectors are known as shape descriptors, and corresponding pairs of them, from the measurements and predictions, are plotted in the graphs on the right of the bottom graphic for each load case.
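To give a flavour of the decomposition step, here is a minimal, purely illustrative sketch in Python (not the MOTIVATE implementation, and using a synthetic 1-D profile rather than a full deformation field): the deformation is fitted with orthogonal Chebyshev polynomials, and the fitted coefficients play the role of the shape descriptors making up a feature vector.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def decompose(profile, order):
    """Fit Chebyshev polynomials to a 1-D deformation profile; the
    fitted coefficients play the role of shape descriptors."""
    x = np.linspace(-1.0, 1.0, profile.size)
    return C.chebfit(x, profile, deg=order)

def reconstruct(descriptors, n_points):
    """Rebuild the profile from its shape descriptors."""
    x = np.linspace(-1.0, 1.0, n_points)
    return C.chebval(x, descriptors)

# Synthetic out-of-plane deformation profile (purely illustrative)
x = np.linspace(-1.0, 1.0, 201)
profile = 0.8 * x**2 - 0.1 * x + 0.05

descriptors = decompose(profile, order=4)     # the feature vector
rebuilt = reconstruct(descriptors, 201)
residual = np.max(np.abs(rebuilt - profile))  # fidelity of the reduced form
```

A small residual confirms that the handful of descriptors captures the field, which is what makes the reduced form useful for comparing measurements with predictions.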
If the predictions were in perfect agreement with the measurements, then all of the points on these graphs would lie on the line of equality [y = x], which is the solid line on each graph.  However, perfect agreement is unobtainable because there will always be uncertainty present; so the question arises: how much deviation from the solid line is acceptable?  One answer is that the deviation should be less than the uncertainty present in the measurements, which we evaluated with our new method; this bound is shown by the dashed lines.  Hence, when all of the points fall inside the dashed lines, the predictions are at least as good as the measurements.  If some points lie outside the dashed lines, then we can look at the form of the corresponding shape descriptors to start diagnosing why there are significant differences between our model and the experiment.  The forms of these outlying shape descriptors are shown as insets on the plots.  However, busy or non-technical decision-makers are often not interested in this level of detailed analysis and instead just want to know how good the predictions are.  To answer this question, we have implemented a validation metric (VM) that we developed [see ‘Million to one’ on November 21st, 2018], which allows us to state the probability that the predictions and measurements belong to the same population, given the known uncertainty in the measurements – these probabilities are shown in the black boxes superimposed on the graphs.
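The dashed-line band test can be sketched in a few lines of Python.  This is only the simple band check, not the probabilistic validation metric itself, and the function name and numbers below are hypothetical: paired shape descriptors are compared against the line of equality, and any pair deviating by more than the measurement uncertainty is flagged for diagnosis.

```python
import numpy as np

def check_against_band(measured, predicted, u_meas):
    """Compare paired shape descriptors against the line of equality y = x.

    Pairs deviating from equality by more than the measurement
    uncertainty u_meas are flagged as outliers for diagnosis; the
    remaining fraction falls inside the dashed-line band.
    """
    deviation = np.abs(np.asarray(predicted) - np.asarray(measured))
    outliers = np.flatnonzero(deviation > u_meas)
    fraction_inside = 1.0 - outliers.size / len(measured)
    return outliers, fraction_inside

# Illustrative descriptor pairs and uncertainty (not the panel data)
measured = [5.2, -3.1, 1.8, 0.9, -0.4]
predicted = [5.0, -2.8, 2.9, 1.0, -0.5]
outliers, fraction_inside = check_against_band(measured, predicted, u_meas=0.5)
```

With these illustrative numbers, only the third descriptor pair falls outside the band, so its shape would be examined to diagnose the source of the disagreement.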

These novel methods create a toolbox for alleviating uncertainty about predictions of structural behaviour in industrial contexts.  Please get in touch if you want more information in order to test these tools yourself.

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660 and the Swiss State Secretariat for Education, Research and Innovation under contract number 17.00064.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.

Same problems in a different language

I spent a lot of time on trains last week.  I left Liverpool on Tuesday evening for Bristol and spent Wednesday at Airbus in Filton discussing the implementation of the technologies being developed in the EU Clean Sky 2 projects MOTIVATE and DIMES.  On Wednesday evening I travelled to Bracknell and on Thursday gave a seminar at Syngenta on model credibility in predictive toxicology before heading home to Liverpool.  But, on Friday I was on the train again, to Manchester this time, to listen to a group of my PhD students presenting their projects to their peers in our new Centre for Doctoral Training called Growing skills for Reliable Economic Energy from Nuclear, or GREEN.  The common thread, besides the train journeys, is the Fidelity And Credibility of Testing and Simulation (FACTS).  My research group is working on how we demonstrate the fidelity of predictions from models and how we establish trust in both predictions from computational models and measurements from experiments, which are often also ‘models’ of the real world.  The issues are similar whether we are considering the structural performance of aircraft [as on Wednesday], the impact of agro-chemicals [as on Thursday], or the performance of fusion energy and the impact of a geological disposal site [as on Friday] (see ‘Hierarchical modelling in engineering and biology‘ on March 14th, 2018).  The scientific and technical communities associated with each application talk a different language, in the sense that they use different technical jargon and acronyms; and they are surprised and interested to discover that similar problems are being tackled by communities that they rarely think about or encounter.

On the trustworthiness of multi-physics models

I stayed in Sheffield city centre a few weeks ago and walked past the standard measures in the photograph on my way to speak at a workshop.  In the past, when the cutlery and tool-making industry in Sheffield was focussed around small workshops, or little mesters as they were known, these standards would have been used to check the tools being manufactured.  A few hundred years later, the range of standards in existence has extended far beyond the weights and measures where it started, and now includes standards for processes and artefacts as well as for measurements.  The process of validating computational models of engineering infrastructure is moving slowly towards establishing an internationally recognised standard [see two of my earliest posts: ‘Model validation‘ on September 18th, 2012 and ‘Setting standards‘ on January 29th, 2014].  We have guidelines that recommend approaches for different parts of the validation process [see ‘Setting standards‘ on January 29th, 2014]; however, many types of computational model present significant challenges when establishing their reliability [see ‘Spatial-temporal models of protein structures‘ on March 27th, 2019].  Under the auspices of the MOTIVATE project, we are gathering experts in Zurich on November 5th, 2019 to discuss the challenges of validating multi-physics models, establishing credibility and the future use of data from experiments.  It is the fourth in a series of workshops held previously in Shanghai, London and Munich.  For more information and to register, follow this link.  Come and join our discussions in one of my favourite cities, where we will be following ‘In Einstein’s footprints‘ [posted on February 27th, 2019].

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.

Joining the dots

Six months ago, I wrote about ‘Finding DIMES’ as we kicked off a new EU-funded project to develop an integrated measurement system for identifying and tracking damage in aircraft structures.  We are already a quarter of the way through the project and we have a concept design for a modular measurement system based on commercial off-the-shelf components.  We started from the position of wanting our system to provide answers to four of the five questions that Farrar & Worden [1] posed for structural health monitoring systems in 2007 and, in addition, to provide information to answer the fifth.  The five questions are: Is there damage?  Where is the damage?  What kind of damage is present?  How severe is the damage?  And how much useful life remains?

During the last six months, our problem definition has evolved through discussions with our EU Topic Manager, Airbus, into four objectives, namely: to quantify applied loads; to support condition-led/predictive maintenance; to detect indications of damage of 6 mm diameter or greater in composites and cracks longer than 1 mm in metals; and to provide a digital solution.  At first glance there may not appear to be much connection between the initial problem definition and the current version; but they are not very far apart, although the current version is more specific.  This evolution from an idealised vision to a practical goal is normal in engineering projects.

We plan to use point sensors, such as resistance strain gauges or fibre Bragg gratings, to quantify applied loads and track usage history; while imaging sensors will allow us to measure strain fields that will provide information about the changing condition of the structure using the image decomposition techniques developed in previous EU-funded projects: ADVISE, VANESSA (see ‘Setting standards‘ on January 29th, 2014) and INSTRUCTIVE.  We will use these techniques to identify and track cracks in metals [2]; while for composites, we will apply a technique developed through an EPSRC iCASE award from 2012-16 on ‘Full-field strain-based methods for NDT & structural integrity measurement’ [3].
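One simple way to use those strain-field feature vectors for monitoring, sketched below in Python, is to compare each newly acquired feature vector against a baseline from the pristine structure and flag an inspection when the change exceeds a noise floor derived from the measurement uncertainty.  This is only a hedged illustration of the idea; the function, descriptor values and threshold are all hypothetical, not the published crack-tracking technique.

```python
import numpy as np

def damage_indicator(baseline, current):
    """Euclidean distance between the current and baseline feature
    vectors; a value above the noise floor suggests the strain field
    has changed, i.e. possible damage initiation or growth."""
    return float(np.linalg.norm(np.asarray(current) - np.asarray(baseline)))

baseline = [0.45, -0.10, 0.40, 0.00]  # pristine-state descriptors (illustrative)
history = [
    [0.45, -0.10, 0.40, 0.00],        # no change
    [0.46, -0.11, 0.41, 0.02],        # drift within measurement noise
    [0.60, -0.05, 0.55, 0.12],        # marked change: flag for inspection
]
noise_floor = 0.05                    # hypothetical, set from measurement uncertainty
flags = [damage_indicator(baseline, fv) > noise_floor for fv in history]
```

In this toy example only the third acquisition exceeds the noise floor, so only it would trigger a closer look at the strain field.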

I gave a short briefing on DIMES to a group of Airbus engineers last month and it was good to see some excitement in the room about the direction of the project.  And it felt good to be highlighting how we are building on earlier investments in research by joining the dots to create a deployable measurement system and delivering the complete picture in terms of information about the condition of the structure.

Image: Infrared photograph of a DIMES meeting in Ulm.

References

  1. Farrar, C.R. & Worden, K., An introduction to structural health monitoring, Phil. Trans. R. Soc. A, 365:303-315, 2007.
  2. Middleton, C.A., Gaio, A., Greene, R.J. & Patterson, E.A., Towards automated tracking of initiation and propagation of cracks in aluminium alloy coupons using thermoelastic stress analysis, Journal of Nondestructive Evaluation, 38:18, 2019.
  3. Christian, W.J.R., DiazDelaO, F.A. & Patterson, E.A., Strain-based damage assessment for accurate residual strength prediction of impacted composite laminates, Composite Structures, 184:1215-1223, 2018.

The INSTRUCTIVE and DIMES projects have received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreements No. 685777 and No. 820951 respectively.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.