Tag Archives: strain distribution

Jigsaw puzzling without a picture

A350 XWB passes Maximum Wing Bending test

Research sometimes feels like putting together a jigsaw puzzle without the picture, or without being sure that you have all of the pieces.  The pieces we are trying to fit together at the moment are: (i) image decomposition of strain fields [see ‘Recognising strain’ on October 28th 2015], which allows fields containing millions of data values to be represented by a feature vector with only tens of elements and so makes it practical to compare maps or fields of predictions from a computational model with measurements made in the real world (a rough sketch of this decomposition is given in code below); (ii) evaluation of the variation in measurement uncertainty over a field of view of measured displacements or strains in a large structure [see ‘Industrial uncertainty’ on December 12th 2018], which provides information about the quality of the measurements; and (iii) a probabilistic validation metric that provides a measure of how well predictions from a computational model represent measurements made in the real world [see ‘Million to one’ on November 21st 2018].

We have found some of the missing pieces of the jigsaw.  For example, we have established how to represent the distribution of measurement uncertainty in the feature vector domain [see ‘From strain measurements to assessing El Niño events’ on March 17th 2021] so that it can be used to assess the significance of differences between measurements and predictions represented by their feature vectors, which connects (i) and (ii).  Very recently, we have demonstrated a generic technique for performing image decomposition of irregularly shaped fields of data, or data fields with holes [see Christian et al, 2021], which extends our method for comparing measurements and predictions to real-world objects rather than idealised shapes.  This allows (i) to be used in industrial applications, but we still have to work out how to connect it to the probabilistic metric in (iii) while also incorporating spatially-varying uncertainty.  Because the techniques treat all fields of data as images, they are agnostic about the source and format of the data and can be used in a wide range of applications, as demonstrated in our recent work on El Niño events [see Alexiadis et al, 2021].

However, at the moment, our main focus is on their application to ground tests on aircraft structures as part of the Smarter Testing project in collaboration with Airbus, Centre for Modelling & Simulation, Dassault Systèmes, GOM UK Ltd, and the National Physical Laboratory, with funding from the Aerospace Technology Institute.  Together we are working towards digital continuity across virtual and physical testing of aircraft structures to provide live data fusion and to enable condition-led inspections, test control and validation of computational models.  We anticipate these advances will reduce the time and cost of physical tests and accelerate the development of new aircraft designs that will contribute to global sustainability targets (the aerospace industry has committed to reduce CO2 emissions to 50% of 2005 levels by 2050).  The Smarter Testing project has an ambitious goal, which reveals that our pieces of the jigsaw puzzle belong to a small section of a much larger one.
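To give a feel for what the image decomposition in (i) involves, here is a rough sketch in Python (not our production code): a strain map on a rectangular field of view is fitted with a two-dimensional Chebyshev polynomial series and the fitted coefficients serve as the feature vector, so two fields can be compared through a handful of numbers rather than millions of pixels.  The function names, polynomial degree and simple least-squares fit are illustrative choices; the method described in Christian et al, 2021 also handles irregularly shaped fields and holes.

```python
# A rough, self-contained sketch of image decomposition of a strain field
# into a low-dimensional feature vector, assuming a rectangular field of view.
import numpy as np
from numpy.polynomial import chebyshev as C

def decompose(strain_map, degree=5):
    """Return the 2-D Chebyshev coefficients of a strain map as a feature vector."""
    ny, nx = strain_map.shape
    x, y = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(-1, 1, ny))
    # Basis functions evaluated at every pixel (pseudo-Vandermonde matrix)
    V = C.chebvander2d(x.ravel(), y.ravel(), [degree, degree])
    coeffs, *_ = np.linalg.lstsq(V, strain_map.ravel(), rcond=None)
    return coeffs  # (degree + 1)**2 values instead of nx * ny pixels

def difference(measured, predicted, degree=5):
    """Euclidean distance between the feature vectors of two fields."""
    return np.linalg.norm(decompose(measured, degree) - decompose(predicted, degree))

# Two synthetic 300 x 300 strain fields reduced to 36 numbers each
xx, yy = np.meshgrid(np.linspace(-1, 1, 300), np.linspace(-1, 1, 300))
field_a = 1e-3 * (xx**2 + 0.5 * yy)
field_b = field_a + 1e-5 * np.sin(3 * np.pi * xx)
print(difference(field_a, field_b))
```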

For more on the Smarter Testing project see:

https://www.aerospacetestinginternational.com/news/structural-testing/smarter-testing-research-program-to-link-virtual-and-physical-aerospace-testing.html

https://www.aerospacetestinginternational.com/opinion/how-integrating-the-virtual-and-physical-will-make-aerospace-testing-and-certification-smarter.html

References

Alexiadis A, Ferson S, Patterson EA. Transformation of measurement uncertainties into low-dimensional feature vector space. Royal Society Open Science. 8(3):201086, 2021.

Christian WJ, Dean AD, Dvurecenska K, Middleton CA, Patterson EA. Comparing full-field data from structural components with complicated geometries. Royal Society Open Science. 8(9):210916, 2021.

Image: http://www.airbus.com/galleries/photo-gallery

Spatio-temporal damage maps for composite materials

Earlier this year, my group published a new technique for illustrating the development of damage, as a function of both space and time, in materials during testing in a laboratory.  The information is presented in a damage-time map that shows where and when damage appears in the material.  The maps are based on the concept that damage represents a change in the structure of the material and, hence, produces changes in the load paths or stress distribution in the material.  We can use any of a number of optical techniques to measure strain, which is directly related to stress, across the surface of the material, and then look for changes in the strain distribution in real-time.  Wherever a permanent change occurs there must also be permanent deformation or damage. We use image decomposition techniques, which we developed some time ago [see ‘Recognising strain‘ on October 28th, 2015], to identify the changes. Our damage-time maps remove the need for skilled operators to spend large amounts of time reviewing data and making subjective decisions.  They also allow a large amount of information to be presented in a single image, which makes detailed comparisons with computer predictions easier and more readily quantifiable and, in turn, supports the validation of computational models [see ‘Model validation‘ on September 18th, 2012].
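For readers who like to see the idea in code, here is a much-simplified sketch of a damage-time map, not the algorithm in the paper, which works with decomposed feature vectors rather than raw pixels: each measured strain field is compared with the field expected from linear scaling of an early, undamaged map, and the first load step at which a pixel departs permanently from that expectation is recorded.  The function name and the single noise threshold are illustrative assumptions.

```python
# Much-simplified sketch of a damage-time map: 'when' damage first appears
# at every 'where' in the field of view.
import numpy as np

def damage_time_map(strain_maps, loads, noise_threshold):
    """strain_maps: array of shape (n_steps, ny, nx) of measured strain fields;
    loads: array of shape (n_steps,) of applied loads (first load non-zero);
    noise_threshold: change in strain regarded as measurement noise."""
    baseline = strain_maps[0] / loads[0]          # elastic response per unit load
    expected = loads[:, None, None] * baseline    # linear scaling of the baseline
    residual = np.abs(strain_maps - expected)     # departure from the elastic pattern
    changed = residual > noise_threshold          # permanent change taken as damage
    first = np.argmax(changed, axis=0)            # first step at which each pixel changed
    first[~changed.any(axis=0)] = -1              # -1 where no damage was detected
    return first                                  # a map of 'when' for every 'where'

# Usage (illustrative): strain maps from digital image correlation at 50 load
# steps on a 480 x 640 grid, with loads of shape (50,):
# damage_map = damage_time_map(strain_maps, loads, noise_threshold=5e-5)
```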

The structural integrity of composite materials is an on-going area of research because we only have a limited understanding of these materials.  It is easy to design structures using materials that have a uniform or homogeneous structure and mechanical properties that do not vary with orientation, i.e., isotropic properties.  For simple components, an engineer can predict the stresses and likely failure modes using the laws of physics, a pencil and paper, and perhaps a calculator.  However, when materials contain fibres embedded in a matrix, such as carbon fibres in an epoxy resin, the analysis of structural behaviour becomes much more difficult due to the interactions between the fibres and with the matrix.  Of course, these interactions are also what make composite materials interesting because they allow less material to be used to achieve the same performance as homogeneous isotropic materials.  There are very many ways of arranging fibres in a matrix, as well as many different types of fibre and matrix, and engineers do not yet understand most of these interactions or the mechanisms that lead to failure.

The image shows, on the left, the maximum principal strain in a composite specimen loaded longitudinally in tension to just before failure; and, on the right, the corresponding damage-time map indicating when and where damage developed during the tensile loading.

Source:

Christian WJR, Dvurecenska K, Amjad K, Pierce J, Przybyla C, Patterson EA. Real-time quantification of damage in structural materials during mechanical testing. Royal Society Open Science. 7:191407, 2020.

How many repeats do we need?

This is a question that both my undergraduate students and a group of taught post-graduates have struggled with this month.  In thermodynamics, my undergraduate students were estimating absolute zero in degrees Celsius using a simple manometer and a digital thermometer (this is an experiment from my MOOC: Energy – Thermodynamics in Everyday Life).  They needed to know how many times to repeat the experiment in order to determine whether their result was significantly different from the theoretical value of -273 degrees Celsius [see my post entitled ‘Arbitrary zero‘ on February 13th, 2013 and ‘Beyond zero‘ the following week]. Meanwhile, the post-graduate students were measuring the strain distribution in a metal plate, with a central hole, that was loaded in tension. They needed to know how many times to repeat the experiment to obtain meaningful results that would allow a decision to be made about the validity of their computer simulation of the experiment [see my post entitled ‘Getting smarter‘ on June 21st, 2017].
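As an aside, the calculation the thermodynamics students perform is a straight-line extrapolation: at constant volume the gas pressure falls linearly with temperature, so extending the fitted line to zero pressure gives an estimate of absolute zero.  The sketch below, with made-up illustrative readings rather than real student data, shows the arithmetic.

```python
# Estimating absolute zero by extrapolating pressure readings to zero pressure.
import numpy as np

def estimate_absolute_zero(temps_C, pressures):
    """Fit pressure against temperature and return the temperature at which
    the fitted line reaches zero pressure (the estimate of absolute zero)."""
    slope, intercept = np.polyfit(temps_C, pressures, 1)
    return -intercept / slope

# Made-up readings (degrees Celsius, arbitrary pressure units), constructed
# to lie on a straight line through -273 degrees Celsius; replace with your own data.
temps = np.array([20.0, 40.0, 60.0, 80.0])
pressures = np.array([102.55, 109.55, 116.55, 123.55])
print(estimate_absolute_zero(temps, pressures))   # prints approximately -273.0
```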

The simple answer is that six repeats are needed if you want 98% confidence in the conclusion and you are happy to accept that the margin of error and the standard deviation of your sample are equal.  The latter implies that error bars of the mean plus and minus one standard deviation are also 98% confidence limits, which is often convenient.  Not surprisingly, only a few undergraduate students figured that out and repeated their experiment six times; and the post-graduates pooled their data to give them a large enough sample size.

The justification for this answer lies in an equation that relates the number in a sample, n, to the margin of error, MOE, the standard deviation of the sample, σ, and the shape of the normal distribution described by the z-score or z-statistic, z*: n ≥ (z*σ/MOE)².  The margin of error, MOE, is the maximum expected difference between the true value of a parameter and the sample estimate of the parameter, which is usually the mean of the sample; while the standard deviation, σ, describes the spread of the data values in the sample about the mean value of the sample, μ.  If we don’t know one of these quantities then we can simplify the equation by assuming that they are equal, so that n ≥ (z*)².
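Written out explicitly, the relation being used is the standard expression for the margin of error of a sample mean, with the simplification following when MOE is set equal to σ:

```latex
% Margin of error of a sample mean and the sample size it implies;
% taking MOE equal to sigma gives the simplification used in the text.
\[
  MOE = z^{*}\,\frac{\sigma}{\sqrt{n}}
  \quad\Longrightarrow\quad
  n \ge \left(\frac{z^{*}\,\sigma}{MOE}\right)^{2}
  \quad\text{and, with } MOE = \sigma:\quad
  n \ge (z^{*})^{2}
\]
```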

The z-statistic is the number of standard deviations from the mean at which a data value lies, i.e., its distance from the mean in a Normal distribution, as shown in the graphic [for more on the Normal distribution, see my post entitled ‘Uncertainty about Bayesian methods‘ on June 7th, 2017].  We can specify its value so that the interval defined by its positive and negative values contains 98% of the distribution.  The values of z* for 90%, 95%, 98% and 99% confidence are shown in the table in the graphic with the corresponding values of (z*)², which are equivalent to the minimum values of the sample size, n (the number of repeats).
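If you want to check these numbers for yourself, a few lines of Python reproduce the table; the only assumptions are the two-sided critical value of the Normal distribution and the condition n ≥ (z*)².

```python
# Two-sided z-statistic for each confidence level and the corresponding
# minimum number of repeats when the margin of error equals the standard deviation.
from scipy.stats import norm
import math

for confidence in (0.90, 0.95, 0.98, 0.99):
    z = norm.ppf(1 - (1 - confidence) / 2)   # two-sided critical value
    n = math.ceil(z**2)                      # minimum repeats, n >= (z*)^2
    print(f"{confidence:.0%}: z* = {z:.3f}, (z*)^2 = {z**2:.2f}, n = {n}")
# 98% gives z* = 2.326 and (z*)^2 = 5.41, so six repeats are needed.
```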

Confidence limits are defined as μ ± z*σ/√n, but when n = (z*)² this simplifies to μ ± σ.  So, with a sample size of six (n = 6 for 98% confidence), we can state with 98% confidence that there is no significant difference between our mean estimate and the theoretical value of absolute zero when that difference is less than the standard deviation of our six estimates.
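To close the loop on the thermodynamics experiment, the decision rule amounts to the following check, shown here with six made-up estimates rather than real student data:

```python
# Is the mean of six repeat estimates significantly different from -273 degrees Celsius?
import numpy as np

estimates = np.array([-270.1, -275.4, -268.9, -276.2, -271.8, -274.0])  # illustrative values
mean, std = estimates.mean(), estimates.std(ddof=1)   # sample mean and standard deviation
# With n = 6 and 98% confidence, the difference is significant only if it
# exceeds one sample standard deviation.
significant = abs(mean - (-273.0)) > std
print(f"mean = {mean:.1f}, std = {std:.1f}, significant difference: {significant}")
```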

BTW – the apparatus for the thermodynamics experiments costs less than £10.  The instruction sheet is available here – it is not quite an Everyday Engineering Example, but the experiment is designed to be performed in your kitchen rather than a laboratory.