Tag Archives: emergent properties

Reduction in usefulness of reductionism

A couple of months ago I wrote about a set of credibility factors for computational models [see ‘Credible predictions for regulatory decision-making‘ on December 9th, 2020] that we designed to inform interactions between researchers, model builders and decision-makers and to establish trust in the predictions from computational models [1].  This matters because computational modelling is becoming ubiquitous in the development of everything from automobiles and power stations to drugs and vaccines, which inevitably leads to its use in supporting regulatory applications.  However, there is another motivation underpinning our work: the systems being modelled are becoming increasingly complex and are therefore more likely to exhibit emergent behaviour [see ‘Emergent properties‘ on September 16th, 2015], which makes it increasingly unlikely that a reductionist approach to establishing model credibility will succeed [2].  The reductionist approach to science, pioneered by Descartes and Newton, has served science well for hundreds of years and is based on the concept that everything about a complex system can be understood by reducing it to its smallest constituent parts.  It is the method of analysis that underpins almost everything you learn as an undergraduate engineer or physicist.  However, reductionism loses its power when a system is more than the sum of its parts, i.e., when it exhibits emergent behaviour.  Our approach to establishing model credibility is more holistic than traditional methods.  This seems appropriate when modelling complex systems for which a complete knowledge of the relationships and patterns of behaviour may not be attainable, e.g., when unexpected or unexplainable emergent behaviour occurs [3].  The hegemony of reductionism in science made us nervous about writing about its shortcomings four years ago when we first published our ideas about model credibility [2].
So, I was pleased to see a paper published last year [4] that identified five fundamental properties of biology that weaken the power of reductionism, namely (1) biological variation is widespread and persistent, (2) biological systems are relentlessly nonlinear, (3) biological systems contain redundancy, (4) biology consists of multiple systems interacting across different time and spatial scales, and (5) biological properties are emergent.  Many engineered systems possess all five of these fundamental properties – you just need to look at them from the appropriate perspective, for example, through a microscope to see the variation in microstructure of a mass-produced part.  Hence, in the future, there will need to be an increasing emphasis on holistic approaches and systems thinking in both the education and practices of engineers as well as biologists.

For more on emergence in computational modelling see Manuel Delanda, Philosophy and Simulation: The Emergence of Synthetic Reason, Continuum, London, 2011.  And, for more on systems thinking, see Fritjof Capra and Luigi Luisi, The Systems View of Life: A Unifying Vision, Cambridge University Press, 2014.

References:

[1] Patterson EA, Whelan MP & Worth A, The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application, Computational Toxicology, 17: 100144, 2021.

[2] Patterson EA & Whelan MP, A framework to establish credibility of computational models in biology, Progress in Biophysics and Molecular Biology, 129: 13-19, 2017.

[3] Patterson EA & Whelan MP, On the validation of variable fidelity multi-physics simulations, J. Sound & Vibration, 448:247-258, 2019.

[4] Pruett WA, Clemmer JS & Hester RL, Physiological Modeling and Simulation—Validation, Credibility, and Application. Annual Review of Biomedical Engineering, 22:185-206, 2020.

Red to blue

Some research has a very long incubation time.  Last month, we published a short paper that describes the initial results of research that started just after I arrived in Liverpool in 2011.  There are various reasons for our slow progress, including our caution about the validity of the original idea and the challenges of working across discipline boundaries.  However, we were induced to rush to publication by the realization that others were catching up with us [see blog post and conference paper].  Our title does not give much away: ‘Characterisation of metal fatigue by optical second harmonic generation‘.

Second harmonic generation or frequency doubling occurs when photons interact with a non-linear material and are combined to produce new photons with twice the energy, and hence, twice the frequency and half the wavelength of the original photons.  Photons are discrete packets of energy that, in our case, are supplied in pulses of 2 picoseconds from a laser operating at a wavelength of 800 nanometres (nm).  The photons strike the surface, are reflected, and then collected in a spectrograph to allow us to evaluate the wavelength of the reflected photons.  We look for ones at 400 nm, i.e. a shift from red to blue.
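The red-to-blue arithmetic can be checked directly from the photon energy relation E = hc/λ; here is a minimal sketch (constants rounded):

```python
# Check the second harmonic arithmetic: two 800 nm photons combine
# into one photon with twice the energy, hence half the wavelength.
h = 6.626e-34    # Planck constant, J s (rounded)
c = 2.998e8      # speed of light, m/s (rounded)

lam_in = 800e-9              # incident wavelength, m
E_in = h * c / lam_in        # energy of one incident photon, J

E_out = 2 * E_in             # SHG: two photons combined into one
lam_out = h * c / E_out      # doubled energy means halved wavelength

print(round(lam_out * 1e9, 6))   # 400.0 nm, i.e. red to blue
```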

The key finding of our research is that the second harmonic generation from material in the plastic zone ahead of a propagating fatigue crack is different from that of virgin material that has experienced no plastic deformation.  This is significant because the shape and size of the crack-tip plastic zone determine the rate and direction of crack propagation; so, information about the plastic zone can be used to predict the life of a component.  At first sight, this capability appears similar to thermoelastic stress analysis [see ‘Instructive Update‘ on October 4th, 2017]; however, the significant potential advantage of second harmonic generation is that the component does not have to be subject to a cyclic load during the measurement, which implies we could study behaviour during a load cycle as well as conduct forensic investigations.  We have some work to do to realise this potential, including developing an instrument for routine measurements in an engineering laboratory, rather than an optics lab.
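To illustrate how crack-tip information feeds a life prediction, here is a sketch using the Paris law, da/dN = C(ΔK)^m, a standard crack-growth relation rather than the method of our paper; all constants and dimensions below are illustrative only:

```python
# Hypothetical fatigue-life estimate by integrating the Paris law,
# da/dN = C * (dK)^m; constants are illustrative, not from the paper.
import math

C, m = 1e-11, 3.0       # Paris constants, (m/cycle)/(MPa*sqrt(m))^m
Y = 1.12                # geometry factor, assumed constant
d_sigma = 100.0         # applied stress range, MPa

a = 1e-3                # initial crack length, m
a_final = 10e-3         # crack length at failure, m
da = 1e-6               # crack-length integration step, m

N = 0.0                 # accumulated load cycles
while a < a_final:
    dK = Y * d_sigma * math.sqrt(math.pi * a)  # stress intensity range, MPa*sqrt(m)
    N += da / (C * dK**m)                      # cycles to grow the crack by da
    a += da

print(f"estimated life: {N:.2e} cycles")
```

The point of the sketch is only that a measured plastic-zone (and hence crack-tip) state constrains the growth-rate term, from which remaining life follows by integration.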

Last week, I promised weekly links to posts on relevant Thermodynamics topics for students following my undergraduate module; so here are three: ‘Emergent properties‘, ‘Problem-solving in Thermodynamics‘, and ‘Running away from tigers‘.


Emergent properties

Perhaps my strongest memory of being taught at school is that of the head of chemistry combining hydrogen and oxygen using an old glass drinks bottle and a burning taper.  The result was explosive, exciting and memorable.  It certainly engaged the attention of everyone in the class.  As far as I am aware, the demonstration was performed at least once per year for decades; but modern health and safety regulations would probably prevent such a demonstration today.

One of the interesting things about combining these two gases at room temperature is that the result is a liquid: water.  This could be construed as an emergent property because an examination of the properties of water would not lead you to predict that it was formed from two gases.  The philosopher C.D. Broad (1887-1971) coined the term ‘emergent properties’ for those properties that emerge at a certain level of complexity but do not exist at lower levels.

Perhaps a better example of emergent properties is the pressure and temperature of steam.  We know that water molecules in a cloud of steam are whizzing around randomly, bouncing into one another and the walls of the container – this is the kinetic theory of gases.  If we add energy to the steam, for instance by heating it, then the molecules will gain kinetic energy and move around more quickly.  The properties of pressure and temperature emerge when we zoom out from the molecules and consider the steam in its container as a system.  The temperature of the steam is a measure of the average kinetic energy of the molecules, and the pressure is the average force per unit area exerted by the molecules as they hit the walls of the container.
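This zooming out can be sketched numerically: sample molecular velocities from the Maxwell–Boltzmann distribution (each velocity component is Gaussian) and recover the temperature from the average kinetic energy via ⟨KE⟩ = (3/2)kT; the temperature and molecular mass below are assumed for illustration:

```python
# Kinetic-theory sketch: temperature emerges as a measure of the
# average molecular kinetic energy, <KE> = (3/2) k T.
import math
import random

k = 1.381e-23        # Boltzmann constant, J/K
T = 400.0            # assumed steam temperature, K
m = 2.99e-26         # mass of one water molecule, kg

random.seed(1)                    # reproducible sampling
sigma = math.sqrt(k * T / m)      # std dev of each velocity component, m/s

n = 200_000
ke_sum = 0.0
for _ in range(n):
    vx = random.gauss(0.0, sigma)
    vy = random.gauss(0.0, sigma)
    vz = random.gauss(0.0, sigma)
    ke_sum += 0.5 * m * (vx * vx + vy * vy + vz * vz)

T_est = (2.0 / 3.0) * (ke_sum / n) / k   # invert <KE> = (3/2) k T
print(f"recovered temperature: {T_est:.1f} K")   # close to 400 K
```

No individual molecule has a temperature; the property only appears when we average over the whole population, which is the point of the paragraph above.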

Manuel Delanda takes these ideas further in a brilliant description of modelling a thunderstorm in his book Philosophy and Simulation: The Emergence of Synthetic Reason.  There are no equations and it is written for the layman so don’t be put off by the title.  He explains that emergent properties can be established by elucidating the mechanisms that produce them at one scale and then these emergent properties become the components of a phenomenon at a much larger scale. This allows engineers to construct models that take for granted the existence of emergent properties at one scale to explain behaviour at another, so for example we don’t need to model molecular movement to predict heat transfer. This is termed ‘mechanism-independence’.

Ok, that’s deep enough for one post!  Except to mention that Capra & Luisi have proposed that life is an emergent property that is not present in the constituent parts of living things and which only appears when the parts are assembled.  Of course, it also disappears when you disassemble a living system, i.e. dissect it.

Sources:

Chapter 1 ‘The Storm in the Computer’ in Philosophy and Simulation: The Emergence of Synthetic Reason by Manuel Delanda, published by Continuum, London, 2011 (pages 7-21).

Fritjof Capra and Luigi Luisi, The Systems View of Life: A Unifying Vision, Cambridge University Press, 2014.