Category Archives: FACTS

Credibility is in the eye of the beholder

Last month I described how computational models are used as more than fables in many areas of applied science, including engineering and precision medicine [see ‘Models as fables’ on March 16th, 2016].  When people need to make decisions with socioeconomic and/or personal costs based on the predictions from these models, then the models need to be credible.  Credibility is like beauty: it is in the eye of the beholder.  It is a challenging problem to convince decision-makers, who are often not expert in the technology or modelling techniques, that the predictions are reliable and accurate.  After all, a model that is reliable and accurate but in which decision-makers have no confidence is almost useless.  In my research we are interested in the credibility of computational mechanics models that are used to optimise the design of load-bearing structures, whether it is the frame of a building, the wing of an aircraft or a hip prosthesis.  We have techniques that allow us to characterise maps of strain using feature vectors [see my post entitled ‘Recognising strain’ on October 28th, 2015] and then to compare the ‘distances’ between the vectors representing the predictions and measurements.  If the predicted map of strain were a perfect representation of the map measured in a physical prototype, then this ‘distance’ would be zero.  Of course, this never happens, because there is noise in the measured data and our models are never perfect: they contain simplifying assumptions that make the modelling viable.  The difficult question is how much difference is acceptable between the predictions and measurements.  The public expect certainty with respect to the performance of an engineering structure, whereas engineers know that there is always some uncertainty.  We can reduce it, but that costs money: money for more sophisticated models, for more computational resources to execute the models, and for more and better quality measurements.
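To make the idea of a ‘distance’ concrete, here is a minimal Python sketch with invented numbers: the feature vectors and the acceptance threshold are hypothetical placeholders for illustration, not our published validation protocol (see the sources under ‘Models as fables’ below).

```python
import numpy as np

# Hypothetical feature vectors: coefficients obtained by decomposing
# the predicted and the measured strain maps (see 'Recognising strain').
predicted = np.array([0.92, -0.31, 0.08, 0.004, -0.02])
measured = np.array([0.95, -0.29, 0.10, 0.010, -0.03])

# The Euclidean 'distance' between the two representations; zero would
# mean a perfect match, which measurement noise and modelling
# assumptions make unattainable in practice.
distance = np.linalg.norm(predicted - measured)

# An illustrative acceptance criterion: allow a difference comparable
# to the uncertainty in the measured coefficients (threshold invented).
measurement_uncertainty = 0.05
acceptable = distance <= 2.0 * measurement_uncertainty
print(f"distance = {distance:.4f}, acceptable = {acceptable}")
```

In practice the threshold has to be justified from the measurement uncertainty, which is one reason better quality measurements cost money.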

Models as fables

In his book, ‘Economic Rules – Why economics works, when it fails and how to tell the difference’, Dani Rodrik describes models as fables – short stories that revolve around a few principal characters who live in an unnamed generic place and whose behaviour and interaction produce an outcome that serves as a lesson of sorts.  This seems to me to be a healthy perspective compared to the almost slavish belief in computational models that is common today in many quarters.  However, in engineering and increasingly in precision medicine, we use computational models as reliable and detailed predictors of the performance of specific systems.  Quantifying this reliability in a way that is useful to non-expert decision-makers is a current area of my research.  This work originated in aerospace engineering where it is possible, though expensive, to acquire comprehensive and information-rich data from experiments and then to validate models by comparing their predictions to measurements.  We have progressed to nuclear power engineering in which the extreme conditions and time-scales lead to sparse or incomplete data that make it more challenging to assess the reliability of computational models.  Now, we are just starting to consider models in computational biology where the inherent variability of biological data and our inability to control the real world present even bigger challenges to establishing model reliability.

Sources:

Dani Rodrik, Economic Rules: Why economics works, when it fails and how to tell the difference, Oxford University Press, 2015

Patterson, E.A., Taylor, R.J. & Bankhead, M., A framework for an integrated nuclear digital environment, Progress in Nuclear Energy, 87:97-103, 2016

Hack, E., Lampeas, G. & Patterson, E.A., An evaluation of a protocol for the validation of computational solid mechanics models, J. Strain Analysis, 51(1):5-13, 2016.

Patterson, E.A., Challenges in experimental strain analysis: interfaces and temperature extremes, J. Strain Analysis, 50(5): 282-3, 2015

Patterson, E.A., On the credibility of engineering models and meta-models, J. Strain Analysis, 50(4):218-220, 2015

Recognising strain

You can step off an express train but you can’t speed up a donkey. This is paraphrased from ‘The Fly Trap’ by Fredrik Sjöberg in the context of our adoption of faster and faster technology and the associated lifestyle. Last week we stepped briefly off the ‘express train’ and lowered our strain levels by going to a concert given by the Royal Liverpool Philharmonic Orchestra, including pieces by Dvorak, Chopin and Tchaikovsky. I am not musical at all and so I am unable to tell you much about the performances or compositions, except to say that I enjoyed them, as did the rest of the audience to judge from the enthusiastic applause. A good deal of my enjoyment arose from the energy of the orchestra and my ability to recognise the musical themes or acoustic features in the pieces. The previous sentence was not intended as a critic’s perspective on the concert but as a tenuous link…

Recognising features is one aspect of my recent research, though in strain data rather than music. Modern digital technology allows us to acquire information-rich data maps with tens of thousands of individual data values arranged in arrays or matrices, in which it can be difficult to spot patterns or features. We treat our strain data as images and use image decomposition to compress a data matrix into a feature vector. The diagram shows the process of image decomposition, in which a colour image is converted to a map of intensity in the image. The intensity values can be stored in a matrix and we can fit sets of polynomials to them by ‘tuning’ the coefficients in the polynomials. The coefficients are gathered together in a feature vector. The original data can be reconstructed from the feature vector if you know the set of polynomials used in the decomposition process, so decomposition is also a form of data compression. It is easier to recognise features in the small number of coefficients than in the original data map, which is why we use the process and why it was developed to allow computers to perform pattern recognition tasks such as facial recognition.
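A minimal Python sketch of the decomposition step may make this concrete. It is illustrative only: it fits two-dimensional Chebyshev polynomials to a synthetic intensity map, whereas our published work (see the sources below) uses Zernike and Fourier-Zernike shape descriptors, and the map, degrees and sizes are invented.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Synthetic 'intensity map' standing in for a measured strain field.
x = np.linspace(-1, 1, 64)
y = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(x, y)
intensity = np.exp(-2 * (X**2 + Y**2)) * (1 + 0.3 * X)

# Build a 2D Chebyshev basis up to degree 4 in each direction and fit
# the coefficients by linear least squares ('tuning' the polynomials).
degree = 4
basis = C.chebvander2d(X.ravel(), Y.ravel(), [degree, degree])
coeffs, *_ = np.linalg.lstsq(basis, intensity.ravel(), rcond=None)

# The coefficients form the feature vector: 25 numbers in place of
# 4096 pixel values, i.e. decomposition doubles as data compression.
print("feature vector length:", coeffs.size)

# Reconstruct the map from the feature vector and check the residual.
reconstructed = (basis @ coeffs).reshape(intensity.shape)
print("max reconstruction error:", np.abs(reconstructed - intensity).max())
```

The reconstruction check at the end is the compression argument in miniature: if the residual is small, the 25 coefficients carry essentially the same information as the 4096 pixel values.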

Sources:

Wang, W., Mottershead, J.E., Patki, A. & Patterson, E.A., Construction of shape features for the representation of full-field displacement/strain data, Applied Mechanics and Materials, 24-25:365-370, 2010.

Patki, A.S. & Patterson, E.A., Decomposing strain maps using Fourier-Zernike shape descriptors, Exptl. Mech., 52(8):1137-1149, 2012.

Nabatchian, A., Abdel-Raheem, E. & Ahmadi, M., Human face recognition using different moment invariants: a comparative review, Congress on Image and Signal Processing, 661-666, 2008.

Seeing the invisible

Track of the Brownian motion of a 50 nanometre diameter particle in a fluid.

Nanoparticles are being used in a myriad of applications including sunscreen creams, sports equipment and even to study the stickiness of snot!  By definition, nanoparticles should have one dimension less than 100 nanometres, which is one thousandth of the thickness of a human hair.  Some nanoparticles are toxic to humans and so scientists are studying the interaction of nanoparticles with human cells.  However, a spherical nanoparticle is smaller than the wavelength of visible light and so is invisible in the conventional optical microscopes used by biologists.  We can view nanoparticles using a scanning electron microscope, but the electron beam damages living cells, so this is not a good solution.  An alternative is to adjust an optical microscope so that the nanoparticles produce caustics [see post entitled ‘Caustics’ on October 15th, 2014] many times the size of the particle.  These ‘adjustments’ involve closing an aperture to produce a pin-hole source of illumination and introducing a filter that only allows through a narrow band of light wavelengths.  An optical microscope adjusted in this way is called a ‘nanoscope’ and, with the addition of a small oscillator on the microscope objective lens, can be used to track nanoparticles using the technique described in last week’s post entitled ‘Holes in liquid‘.
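The kind of analysis behind a track like the one pictured above can be sketched in a few lines of Python. This is a hypothetical illustration, not the caustic-tracking algorithm in the sources: it simulates a two-dimensional Brownian track for a 50 nanometre particle in water and then recovers the diffusion coefficient from the mean squared displacement of the track.

```python
import numpy as np

# Simulate a 2D Brownian track like the one in the figure: a 50 nm
# diameter particle in water at room temperature (values illustrative).
kB, T = 1.380649e-23, 293.0          # Boltzmann constant [J/K], temperature [K]
eta, radius = 1.0e-3, 25e-9          # water viscosity [Pa s], particle radius [m]
D = kB * T / (6 * np.pi * eta * radius)   # Stokes-Einstein diffusion coefficient

dt, n_steps = 0.02, 2000             # frame interval [s], track length
steps = np.sqrt(2 * D * dt) * np.random.randn(n_steps, 2)
track = np.cumsum(steps, axis=0)     # particle positions over time [m]

# Recover D from the track via the mean squared displacement (MSD):
# for 2D diffusion, MSD(tau) = 4 * D * tau.
lags = np.arange(1, 50)
msd = np.array([np.mean(np.sum((track[lag:] - track[:-lag])**2, axis=1))
                for lag in lags])
D_est = np.polyfit(lags * dt, msd, 1)[0] / 4

print(f"D (Stokes-Einstein) = {D:.3e} m^2/s, D (from track) = {D_est:.3e} m^2/s")
```

Run in reverse, the same relation lets you estimate a particle’s size from a measured track, which is one of the attractions of being able to follow nanoparticles in a fluid.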

The smallest particles that we have managed to observe using this technique were gold particles of diameter 3 nanometres, or about 10 atoms in diameter, dispersed in a liquid.

Image of 3nm diameter gold particle in a conventional optical microscope (top right), in a nanoscope (bottom right) and composite images in the z-direction of the caustic formed in the nanoscope (left).

Sources:

http://ihcp.jrc.ec.europa.eu/our_activities/nanotechnology/jrc-scientists-develop-a-technique-for-automated-three-dimensional-nanoparticle-tracking-using-a-conventional-microscope

‘Scientists use gold nanoparticles to study the stickiness of snot’ by Rachel Feltman in the Washington Post on October 9th, 2014.

Gineste, J.-M., Macko, P., Patterson, E.A. & Whelan, M.P., Three-dimensional automated nanoparticle tracking using Mie scattering in an optical microscope, Journal of Microscopy, 243(2):172-178, 2011.

Patterson, E.A., & Whelan, M.P., Optical signatures of small nanoparticles in a conventional microscope, Small, 4(10): 1703-1706, 2008.