Tag Archives: aerospace

The blind leading the blind

Three years after it started, the MOTIVATE project has come to an end [see ‘Getting smarter’ on June 21st, 2017].  The focus of the project has been on improving the quality of validation for predictions of structural behaviour in aircraft using fewer, better physical tests.  We have developed an enhanced flowchart for model validation [see ‘Spontaneously MOTIVATEd’ on June 27th, 2018], a method for quantifying uncertainty in measurements of deformation in an industrial environment [see ‘Industrial uncertainty’ on December 12th, 2018] and a toolbox for quantifying the extent to which predictions from computational models represent measurements made in the real world [see ‘Alleviating industrial uncertainty’ on May 13th, 2020].  In the last phase of the project, we demonstrated all of these innovations on the fuselage nose section of an aircraft.  The region of interest was the fuselage skin behind the cockpit window, for which the out-of-plane displacements resulting from an internal pressurisation load were predicted using a finite element model [see ‘Did cubism inspire engineering analysis?’ on January 25th, 2017].  The computational model was provided by Airbus and is shown on the left in the top graphic, with the predictions for the region of interest on the right.  We used a stereoscopic imaging system to record images of a speckle pattern on the fuselage before and after pressurisation; and from these images, we evaluated the out-of-plane displacements using digital image correlation (DIC) [see ‘256 shades of grey’ on January 22nd, 2014 for a brief explanation of DIC].  The bottom graphic shows the measurements being made with assistance from an Airbus contractor, Strain Solutions Limited.  We compared the predictions quantitatively against the measurements in a double-blind process, which meant that the modellers and experimenters had no access to one another’s results.  The predictions were made by one MOTIVATE partner, Athena Research Centre; the measurements were made by another partner, Dantec Dynamics GmbH, supported by Strain Solutions Limited; and the quantitative comparison was made by the project coordinator, the University of Liverpool.  We found that the level of agreement between the predictions and measurements changed with the level of pressurisation; however, the main outcome was the demonstration that a double-blind validation process could be performed to quantify the extent to which the predictions represented the real-world behaviour of a full-scale aerospace structure.
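For readers curious about what the DIC software is doing with those speckle images, here is a minimal sketch of the core idea: a small subset of the speckle pattern is tracked from an image recorded before pressurisation to one recorded afterwards by maximising the normalised cross-correlation between grey-level patches.  Everything in the example is invented for illustration – the synthetic images, subset size and search range – and a real stereoscopic DIC system matches calibrated image pairs to sub-pixel accuracy and triangulates to recover out-of-plane displacements, which this two-dimensional sketch does not attempt.

```python
# Minimal illustration of the matching step at the heart of DIC:
# track a speckle subset from a 'before' image to an 'after' image by
# maximising the normalised cross-correlation.  Synthetic data only.
import numpy as np

def track_subset(ref, deformed, centre, half=15, search=10):
    """Return the integer-pixel displacement (d_row, d_col) of a subset.

    ref, deformed : 2-D grey-level arrays ('256 shades of grey')
    centre        : (row, col) of the subset centre in the reference image
    half          : half-width of the square subset
    search        : +/- pixels searched around the reference position
    """
    r, c = centre
    f = ref[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    f = (f - f.mean()) / f.std()
    best, best_score = (0, 0), -np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            g = deformed[r + dr - half:r + dr + half + 1,
                         c + dc - half:c + dc + half + 1].astype(float)
            g = (g - g.mean()) / g.std()
            score = float((f * g).mean())   # normalised cross-correlation
            if score > best_score:
                best_score, best = score, (dr, dc)
    return best

# Synthetic speckle pattern shifted by a known amount as a check
rng = np.random.default_rng(0)
before = rng.integers(0, 256, (200, 200)).astype(np.uint8)
after = np.roll(before, shift=(2, -3), axis=(0, 1))       # known shift
print(track_subset(before, after, centre=(100, 100)))      # -> (2, -3)
```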

The content of this post is taken from a paper that was to be given at a conference later this summer; however, the conference has been postponed due to the pandemic.  The details of the paper are: Patterson EA, Diamantakos I, Dvurecenska K, Greene RJ, Hack E, Lampeas G, Lomnitz M & Siebert T, Application of a model validation protocol to an aircraft cockpit panel, submitted to the International Conference on Advances in Experimental Mechanics to be held in Oxford in September 2021.  I would like to thank my co-authors for permission to write about the results in this post, Linden Harris of Airbus SAS for enabling the study, and both him and Eszter Szigeti for providing technical advice.

For more on the validation flowchart see: Hack E, Burguete R, Dvurecenska K, Lampeas G, Patterson E, Siebert T & Szigeti E, Steps towards industrial validation experiments, Multidisciplinary Digital Publishing Institute Proceedings, Vol. 2, No. 8, p. 391. https://www.mdpi.com/2504-3900/2/8/391

For more posts on the MOTIVATE project: https://realizeengineering.blog/category/myresearch/motivate-project/

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660 and the Swiss State Secretariat for Education, Research and Innovation under contract number 17.00064.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.

Alleviating industrial uncertainty

Want to know how to assess the quality of predictions of structural deformation from a computational model and how to diagnose the causes of differences between measurements and predictions?  The MOTIVATE project has the answers; that might seem like an over-assertive claim, but read on and make your own judgment.  Eighteen months ago, I reported on a new method for quantifying the uncertainty present in measurements of deformation made in an industrial environment [see ‘Industrial uncertainty’ on December 12th, 2018] that we were trialling on a 1 m square panel of an aircraft fuselage.  Recently, we have used the measurement uncertainty we found to make judgments about the quality of predictions from computer models of the panel under compressive loading.  The top graphic shows the outside surface of the panel (left) with a speckle pattern to allow measurements of its deformation using digital image correlation (DIC) [see ‘256 shades of grey’ on January 22nd, 2014 for a brief explanation of DIC]; and the inside surface (right) with stringers and ribs.  The bottom graphic shows our results for two load cases: a 50 kN compression (top row) and a 50 kN compression with 1 degree of torsion (bottom row).  The left column shows the out-of-plane deformation measured using a stereoscopic DIC system and the middle column shows the corresponding predictions from a computational model using finite element analysis [see ‘Did cubism inspire engineering analysis?’ on January 25th, 2017].  We have described these deformation fields in a reduced form, as feature vectors, by applying image decomposition [see ‘Recognizing strain’ on October 28th, 2015 for a brief explanation of image decomposition].  The elements of the feature vectors are known as shape descriptors, and corresponding pairs of them, from the measurements and predictions, are plotted in the graphs on the right of the bottom graphic for each load case.  If the predictions were in perfect agreement with the measurements then all of the points on these graphs would lie on the line of equality [y = x], which is the solid line on each graph.  However, perfect agreement is unobtainable because there will always be uncertainty present; so the question arises: how much deviation from the solid line is acceptable?  One answer is that the deviation should be less than the uncertainty present in the measurements, which we evaluated with our new method and which is shown by the dashed lines.  Hence, when all of the points fall inside the dashed lines, the predictions are at least as good as the measurements.  If some points lie outside the dashed lines, then we can look at the form of the corresponding shape descriptors to start diagnosing why there are significant differences between our model and the experiment.  The forms of these outlying shape descriptors are shown as insets on the plots.  However, busy or non-technical decision-makers are often not interested in this level of detailed analysis and instead just want to know how good the predictions are.  To answer this question, we have implemented a validation metric (VM) that we developed [see ‘Million to one’ on November 21st, 2018], which allows us to state the probability that the predictions and measurements are from the same population, given the known uncertainty in the measurements – these probabilities are shown in the black boxes superimposed on the graphs.
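To make the acceptance check concrete, here is a minimal sketch of the comparison between corresponding shape descriptors.  The feature vectors and the uncertainty value are invented for illustration, and the simple pass-rate printed at the end is only a stand-in for the validation metric that produces the probabilities in the black boxes.

```python
# Minimal sketch: compare measured and predicted shape descriptors and
# flag any pair that deviates by more than the measurement uncertainty
# (i.e. falls outside the 'dashed lines').  Illustrative numbers only.
import numpy as np

# Hypothetical feature vectors from image decomposition of the measured
# and predicted out-of-plane deformation fields (same descriptor ordering).
s_measured  = np.array([12.1, -4.3, 7.8, 0.9, -2.2, 5.5])
s_predicted = np.array([11.7, -4.9, 8.4, 2.1, -2.0, 5.1])

u_meas = 0.8   # assumed measurement uncertainty in descriptor units

deviation = np.abs(s_predicted - s_measured)
within_band = deviation <= u_meas            # inside the dashed lines?

for i, (d, ok) in enumerate(zip(deviation, within_band)):
    note = "ok" if ok else "outlier -> inspect this descriptor's shape"
    print(f"descriptor {i}: |prediction - measurement| = {d:.2f}  {note}")

# Crude summary for a busy decision-maker: the fraction of descriptors
# agreeing with the measurements to within the measurement uncertainty.
print(f"{within_band.mean():.0%} of descriptors within measurement uncertainty")
```

In the project itself, the check against the dashed lines and the probabilities in the black boxes come from the published image-decomposition and validation-metric methods referenced above, rather than from a simple pass-rate like this.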

These novel methods create a toolbox for alleviating uncertainty about predictions of structural behaviour in industrial contexts.  Please get in touch if you want more information in order to test these tools yourself.

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660 and the Swiss State Secretariat for Education, Research and Innovation under contract number 17.00064.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.

When seeing nothing is a success

In November I went to Zurich twice: once for the workshop that I wrote about last week [see ‘Fake facts and untrustworthy predictions’ on December 4th, 2019]; and a second time for a progress meeting of the DIMES project [see ‘Finding DIMES’ on February 6th, 2019].  The progress meeting went well.  The project is on schedule and within budget.  So, everyone is happy, and you might be wondering why I am writing about it.  It was what our team was doing around the progress meeting that was exciting.  A few months ago, Airbus delivered a section of an A320 wing to the labs of EMPA, our project partner in Switzerland, and the team at EMPA has been rigging the wing section for a simple bending test so that we can use it to test the integrated measurement system that we are developing in the DIMES project [see ‘Joining the dots’ on July 10th, 2019].  Before and after the meeting, partners from EMPA, Dantec Dynamics GmbH, Strain Solutions Ltd and my group at the University of Liverpool were installing our prototype systems to monitor the condition of the wing when we apply bending loads to it.  There is some pre-existing damage in the wing that we hope will propagate during the test, allowing us to track it with our prototype systems using visible and infra-red spectrum cameras as well as electrical and optical sensors.  The data that we collect during the test will allow us to develop our data-processing algorithms and, if necessary, refine the system design.  The final stage of the DIMES project will involve installing a series of our systems in a complete wing undergoing a structural test in the new Airbus Wing Integration Centre (AWIC) in Filton, near Bristol in the UK.  The schedule is ambitious because we will need to install the sensors for our systems in the wing in the first quarter of next year, probably before we have finished all of the tests at EMPA.  However, the test in Bristol probably will not start until the middle of 2020, by which time we will have refined our algorithms for data processing and be ready for the deluge of data that we are likely to receive from the test at Airbus.  The difference between the two wing tests, besides the level of maturity of our measurement system, is that no damage should be detected in the wing at Airbus, whereas there will be detectable damage in the wing section at EMPA.  So, a positive result will be a success at EMPA but a negative result, i.e. no damage detected, will be a success at Airbus.

The DIMES project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 820951.


The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.

Fake facts & untrustworthy predictions

I need to confess to writing a misleading post some months ago, entitled ‘In Einstein’s footprints?’ on February 27th, 2019, in which I promoted our 4th workshop on the ‘Validation of Computational Mechanics Models’, held last month at the Guild Hall of Carpenters [Zunfthaus zur Zimmerleuten] in Zurich.  I implied that speakers at the workshop would be stepping in Einstein’s footprints when they presented their research, because Einstein presented a paper at the same venue in 1910.  However, as our host in Zurich revealed in his introductory remarks, the Guild Hall was gutted by fire in 2007, so we were meeting in a fake, or replica, which was so good that most of us had not realised.  This was quite appropriate because a theme of the workshop was enhancing the credibility of computer models that are used to replicate the real world.  We discussed the issues surrounding the trustworthiness of models in a wide range of fields including aerospace engineering, biomechanics, nuclear power and toxicology.  Many of the presentations are available on the website of the EU project MOTIVATE, which organised and sponsored the workshop as part of its dissemination programme.  While we did not solve any problems, we did broaden people’s understanding of the issues associated with the trustworthiness of predictions and identified the need to develop common approaches to support regulatory decisions across a range of industrial sectors – that’s probably the theme for our 5th workshop!

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660 and the Swiss State Secretariat for Education, Research and Innovation under contract number 17.00064.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.

Image: https://www.tagesanzeiger.ch/Zunfthaus-Zur-Zimmerleuten-Wiederaufbauprojekt-steht/story/30815219