
Storm in a computer

As part of my undergraduate course on thermodynamics [see ‘Change in focus’ on October 5th, 2022] and in my MOOC on Thermodynamics in Everyday Life [see ‘Engaging learners on-line‘ on May 25th, 2016], I used to ask students to read Chapter 1, ‘The Storm in the Computer’, from Philosophy and Simulation: The Emergence of Synthetic Reason by Manuel Delanda.  It is a mind-stretching read and I recommended that students read it at least twice to appreciate its messages.  To support their learning, I provided them with a précis of the chapter, which is reproduced below in slightly modified form.

At the start of the chapter, the simplest emergent properties, such as the temperature and pressure of a body of water in a container, are discussed [see ‘Emergent properties’ on September 16th, 2015].  These properties are described as emergent because they are not properties of a single component of the system, that is, of individual water molecules, but are features of the system as a whole.  They arise from an objective averaging process over the billions of molecules of water in the container.  The discussion is extended to two bodies of water, one hot and one cold, brought into contact with one another.  An average temperature emerges as the molecules redistribute into a less ordered state.  The spontaneous flow of energy, as temperature differences cancel themselves out, is identified as an important driver or capability, especially when the hot body is continually refreshed, by a fire for instance.  Engineers harness energy gradients, or differences, and the resultant energy flow to do useful work, for instance in turbines.
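
To make the averaging process concrete, here is a minimal sketch (my own illustration, not from Delanda’s chapter) in which a million simulated molecular velocities are drawn from a Maxwell-Boltzmann distribution; no single molecule has a temperature, but one emerges from the kinetic-theory average:

```python
# A minimal sketch of an emergent property: no individual molecule has a
# temperature, but averaging the kinetic energy of many molecules yields one.
# Velocities are drawn from a Maxwell-Boltzmann distribution at ~300 K.
import numpy as np

K_B = 1.380649e-23          # Boltzmann constant, J/K
M = 2.99e-26                # approximate mass of a water molecule, kg

rng = np.random.default_rng(0)
N = 1_000_000               # a large (but still tiny) sample of molecules
sigma = np.sqrt(K_B * 300.0 / M)            # per-component velocity spread
v = rng.normal(0.0, sigma, size=(N, 3))     # one row per molecule

mean_ke = 0.5 * M * (v**2).sum(axis=1).mean()   # mean kinetic energy, J
temperature = 2.0 * mean_ke / (3.0 * K_B)       # (3/2) k T = mean KE
print(f"Emergent temperature: {temperature:.1f} K")
```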

However, Delanda does not deviate to discuss how engineers exploit energy gradients.  Instead he identifies the spontaneous flow of molecules, as they self-organise across an energy gradient, as the driver of circulatory flows in the oceans and atmosphere, known as convection cells.  Five to eight convection cells can merge in the atmosphere to form a thunderstorm.  In thunderstorms, when the rising water vapour becomes rain, the phase transition from vapour to liquid releases latent heat, or energy, that helps sustain the storm system.  At the same time, gradients in electrical charge between the upper and lower sections of the storm generate lightning.

Delanda highlights that emergent properties can be established by elucidating the mechanisms that produce them at one scale, and that these emergent properties can then become the components of a phenomenon at a much larger scale.  This allows scientists and engineers to construct models that take for granted the existence of emergent properties at one scale to explain behaviour at another, which is called ‘mechanism-independence’.  For example, it is unnecessary to model molecular movement to predict heat transfer.  These ideas allow simulations to replicate behaviour at the system level without the need for high-fidelity representations at all scales.  The art of modelling is the ability to decide which changes do, and which do not, make a difference, i.e., what to include and what to exclude.
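
As an illustration of mechanism-independence (again my own sketch, not from the chapter), the model below predicts heat conduction along a bar using only the emergent property, the thermal diffusivity, in a one-dimensional finite-difference scheme; no molecular movement is simulated, and the material value and geometry are illustrative:

```python
# A minimal sketch of mechanism-independence: heat conduction predicted
# from the emergent property alone (thermal diffusivity, alpha), with no
# molecular detail. Geometry and material value are illustrative.
import numpy as np

alpha = 1.4e-7              # thermal diffusivity of water, m^2/s (approximate)
L, nx = 0.1, 51             # bar length (m) and number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha    # time step satisfying the explicit stability limit

T = np.full(nx, 20.0)       # bar initially at 20 C
T[0] = 80.0                 # hot end held at 80 C

for _ in range(20_000):     # march dT/dt = alpha * d2T/dx2 forward in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0], T[-1] = 80.0, 20.0    # fixed-temperature boundary conditions

print(np.round(T[::10], 1))     # temperature profile along the bar
```

The billions of molecular collisions are taken for granted here; their net effect is summarised by the single parameter alpha.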

Source:

Manuel Delanda, Philosophy and Simulation: The Emergence of Synthetic Reason, Continuum, London, 2011.

Image: Painting by Sarah Evans owned by the author.

Blinded by reductionism

I wrote about the weakness of reductionism about 18 months ago [see ‘Reduction in usefulness of reductionism‘ on February 17th, 2021].  Reductionism is the concept that everything about a complex system can be understood by reducing it to its smallest constituent parts.  The concept is flawed because complex systems exhibit emergent properties [see ‘Emergent properties‘ on September 16th, 2015] that appear at a certain level of complexity but do not exist at lower levels.  Life is an emergent property, so when you reduce an organism to its constituent parts, for instance by dissection, you kill it and are unable to observe its normal behaviour.  Reductionism is widespread in Western science and has been blinding us to what is often well known to aboriginal people, i.e., the interconnectedness of nature.  One example is forest ecosystems, which Suzanne Simard, amongst others, has shown to be complex, synergistic, multi-scale organisations of species.  Complexity is obvious to many peoples whose lives are integrated with nature’s ecosystems, but it is really difficult for those of us educated in the reductionist tradition.

Reference:

Suzanne Simard, Finding the Mother Tree, Penguin, 2021.

Reduction in usefulness of reductionism

A couple of months ago I wrote about a set of credibility factors for computational models [see ‘Credible predictions for regulatory decision-making‘ on December 9th, 2020] that we designed to inform interactions between researchers, model builders and decision-makers and to establish trust in the predictions from computational models [1].  This is important because computational modelling is becoming ubiquitous in the development of everything from automobiles and power stations to drugs and vaccines, which inevitably leads to its use in supporting regulatory applications.  However, there is another motivation underpinning our work: the systems being modelled are becoming increasingly complex and more likely to exhibit emergent behaviour [see ‘Emergent properties‘ on September 16th, 2015], which makes it increasingly unlikely that a reductionist approach to establishing model credibility will be successful [2].

The reductionist approach to science, pioneered by Descartes and Newton, has served science well for hundreds of years and is based on the concept that everything about a complex system can be understood by reducing it to its smallest constituent parts.  It is the method of analysis that underpins almost everything you learn as an undergraduate engineer or physicist.  However, reductionism loses its power when a system is more than the sum of its parts, i.e., when it exhibits emergent behaviour.  Our approach to establishing model credibility is more holistic than traditional methods.  This seems appropriate when modelling complex systems for which a complete knowledge of the relationships and patterns of behaviour may not be attainable, e.g., when unexpected or unexplainable emergent behaviour occurs [3].

The hegemony of reductionism in science made us nervous about writing about its shortcomings four years ago when we first published our ideas about model credibility [2].  So, I was pleased to see a paper published last year [4] that identified five fundamental properties of biology that weaken the power of reductionism, namely: (1) biological variation is widespread and persistent; (2) biological systems are relentlessly nonlinear; (3) biological systems contain redundancy; (4) biology consists of multiple systems interacting across different time and spatial scales; and (5) biological properties are emergent.  Many engineered systems possess all five of these fundamental properties – you just need to look at them from the appropriate perspective, for example, through a microscope to see the variation in the microstructure of a mass-produced part.  Hence, in the future, there will need to be an increasing emphasis on holistic approaches and systems thinking in both the education and practice of engineers as well as biologists.

For more on emergence in computational modelling, see Manuel Delanda, Philosophy and Simulation: The Emergence of Synthetic Reason, Continuum, London, 2011; and for more on systems thinking, see Fritjof Capra and Pier Luigi Luisi, The Systems View of Life: A Unifying Vision, Cambridge University Press, 2014.

References:

[1] Patterson EA, Whelan MP & Worth A, The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application, Computational Toxicology, 17: 100144, 2021.

[2] Patterson EA & Whelan MP, A framework to establish credibility of computational models in biology. Progress in Biophysics and Molecular Biology, 129: 13-19, 2017.

[3] Patterson EA & Whelan MP, On the validation of variable fidelity multi-physics simulations, J. Sound & Vibration, 448:247-258, 2019.

[4] Pruett WA, Clemmer JS & Hester RL, Physiological Modeling and Simulation—Validation, Credibility, and Application. Annual Review of Biomedical Engineering, 22:185-206, 2020.

Red to blue

Some research has a very long incubation time.  Last month, we published a short paper that describes the initial results of research that started just after I arrived in Liverpool in 2011.  There are various reasons for our slow progress, including our caution about the validity of the original idea and the challenges of working across discipline boundaries.  However, we were induced to rush to publication by the realisation that others were catching up with us [see blog post and conference paper].  Our title does not give much away: ‘Characterisation of metal fatigue by optical second harmonic generation‘.

Second harmonic generation, or frequency doubling, occurs when photons interact with a non-linear material and are combined to produce new photons with twice the energy and, hence, twice the frequency and half the wavelength of the original photons.  Photons are discrete packets of energy that, in our case, are supplied in pulses of 2 picoseconds from a laser operating at a wavelength of 800 nanometres (nm).  The photons strike the surface, are reflected and then collected in a spectrograph that allows us to evaluate the wavelength of the reflected photons.  We look for ones at 400 nm, i.e., a shift from red to blue.
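
The arithmetic is straightforward to verify.  Here is a quick check (my own, not from our paper) using the photon-energy relation E = hc/λ:

```python
# A quick arithmetic check (my own, not from the paper): two 800 nm photons
# combine into one photon with double the energy, i.e. a wavelength of 400 nm.
H = 6.62607015e-34          # Planck constant, J s
C = 2.99792458e8            # speed of light, m/s
EV = 1.602176634e-19        # joules per electronvolt

e_in = H * C / 800e-9 / EV          # energy of one 800 nm photon, ~1.55 eV
e_out = 2.0 * e_in                  # two photons combined, ~3.10 eV
wavelength_out = H * C / (e_out * EV)
print(f"{e_in:.2f} eV in, {e_out:.2f} eV out -> {wavelength_out * 1e9:.0f} nm")
```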

The key finding of our research is that the second harmonic generation from material in the plastic zone ahead of a propagating fatigue crack is different from that of virgin material that has experienced no plastic deformation.  This is significant because the shape and size of the crack-tip plastic zone determine the rate and direction of crack propagation; so, information about the plastic zone can be used to predict the life of a component.  At first sight, this capability appears similar to thermoelastic stress analysis [see ‘Instructive update‘ on October 4th, 2017]; however, the significant potential advantage of second harmonic generation is that the component does not have to be subject to a cyclic load during the measurement, which implies we could study behaviour during a load cycle as well as conduct forensic investigations.  We have some work to do to realise this potential, including developing an instrument for routine measurements in an engineering laboratory, rather than an optics lab.
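
To indicate how such crack-tip information might feed a life prediction, here is a minimal sketch that integrates the Paris law, da/dN = C(ΔK)^m, a standard crack-growth relation that is not discussed in our paper; the constants, stress range and geometry factor are hypothetical placeholders:

```python
# A hypothetical sketch of a fatigue-life estimate using the Paris law,
# da/dN = C * (delta_K)^m; this relation and all values below are
# illustrative placeholders, not taken from our paper.
import math

C, m = 1e-11, 3.0           # assumed Paris constants (m/cycle, MPa*sqrt(m))
delta_sigma = 100.0         # assumed stress range, MPa
Y = 1.12                    # assumed geometry factor for an edge crack

a, a_final = 1e-3, 10e-3    # grow the crack from 1 mm to 10 mm
da = 1e-6                   # crack-length increment for the integration, m
cycles = 0.0

while a < a_final:
    delta_K = Y * delta_sigma * math.sqrt(math.pi * a)  # stress intensity range
    cycles += da / (C * delta_K**m)     # dN = da / (da/dN)
    a += da

print(f"Estimated life to a 10 mm crack: {cycles:.2e} cycles")
```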

Last week, I promised weekly links to posts on relevant Thermodynamics topics for students following my undergraduate module; so here are three: ‘Emergent properties‘, ‘Problem-solving in Thermodynamics‘, and ‘Running away from tigers‘.