Tag Archives: complex systems

Reduction in usefulness of reductionism

A couple of months ago I wrote about a set of credibility factors for computational models [see ‘Credible predictions for regulatory decision-making‘ on December 9th, 2020] that we designed to inform interactions between researchers, model builders and decision-makers and to establish trust in the predictions from computational models [1].  This is important because computational modelling is becoming ubiquitous in the development of everything from automobiles and power stations to drugs and vaccines, which inevitably leads to its use in supporting regulatory applications.  However, there is another motivation underpinning our work, which is that the systems being modelled are becoming increasingly complex and are likely to exhibit emergent behaviour [see ‘Emergent properties‘ on September 16th, 2015], and this makes it increasingly unlikely that a reductionist approach to establishing model credibility will be successful [2].  The reductionist approach to science, pioneered by Descartes and Newton, has served science well for hundreds of years and is based on the concept that everything about a complex system can be understood by reducing it to its smallest constituent parts.  It is the method of analysis that underpins almost everything you learn as an undergraduate engineer or physicist.  However, reductionism loses its power when a system is more than the sum of its parts, i.e., when it exhibits emergent behaviour.  Our approach to establishing model credibility is more holistic than traditional methods.  This seems appropriate when modelling complex systems for which a complete knowledge of the relationships and patterns of behaviour may not be attainable, e.g., when unexpected or unexplainable emergent behaviour occurs [3].  The hegemony of reductionism in science made us nervous about writing about its shortcomings four years ago when we first published our ideas about model credibility [2].
So, I was pleased to see a paper published last year [4] that identified five fundamental properties of biology that weaken the power of reductionism, namely: (1) biological variation is widespread and persistent, (2) biological systems are relentlessly nonlinear, (3) biological systems contain redundancy, (4) biology consists of multiple systems interacting across different time and spatial scales, and (5) biological properties are emergent.  Many engineered systems possess all five of these fundamental properties – you just need to look at them from the appropriate perspective, for example, through a microscope to see the variation in the microstructure of a mass-produced part.  Hence, in the future, there will need to be an increasing emphasis on holistic approaches and systems thinking in both the education and practice of engineers as well as biologists.
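The nonlinearity in point (2) can be made concrete with a few lines of code.  This is my own illustration, not an example from the paper: the logistic map is about the simplest relentlessly nonlinear system there is, and it shows why reducing such a system to its parts and extrapolating fails.

```python
# Illustrative sketch (not from the paper): the logistic map,
# x_{n+1} = r * x_n * (1 - x_n), is one of the simplest relentlessly
# nonlinear systems.  Two trajectories starting one part in a million
# apart diverge until they are completely uncorrelated.

def logistic_trajectory(r, x0, steps):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(3.9, 0.500000, 50)   # r = 3.9 is in the chaotic regime
b = logistic_trajectory(3.9, 0.500001, 50)   # same rule, tiny perturbation
print(f"largest divergence: {max(abs(x - y) for x, y in zip(a, b)):.3f}")
```

No amount of knowledge about a single iteration step lets you predict where either trajectory ends up; the behaviour is a property of the whole history, not of the parts.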

For more on emergence in computational modelling see Manuel Delanda, Philosophy and Simulation: The Emergence of Synthetic Reason, Continuum, London, 2011. And, for more on systems thinking, see Fritjof Capra and Pier Luigi Luisi, The Systems View of Life: A Unifying Vision, Cambridge University Press, 2014.


[1] Patterson EA, Whelan MP & Worth A, The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application, Computational Toxicology, 17: 100144, 2021.

[2] Patterson EA & Whelan MP, A framework to establish credibility of computational models in biology. Progress in Biophysics and Molecular Biology, 129: 13-19, 2017.

[3] Patterson EA & Whelan MP, On the validation of variable fidelity multi-physics simulations, J. Sound & Vibration, 448:247-258, 2019.

[4] Pruett WA, Clemmer JS & Hester RL, Physiological Modeling and Simulation—Validation, Credibility, and Application. Annual Review of Biomedical Engineering, 22:185-206, 2020.

Digital twins could put at risk what it means to be human

I have written in the past about my research on the development and use of digital twins.  A digital twin is a functional representation in a virtual world of a real-world entity that is continually updated with data from the real world [see ‘Fourth industrial revolution’ on July 4th, 2018 and also a short video at https://www.youtube.com/watch?v=iVS-AuSjpOQ].  I am working with others on developing an integrated digital nuclear environment from which digital twins of individual power stations could be spawned in parallel with the manufacture of their physical counterparts [see ‘Enabling or disruptive technology for nuclear engineering’ on January 1st, 2015 and ‘Digitally-enabled regulatory environment for fusion power-plants’ on March 20th, 2019].  A couple of months ago, I wrote about the difficulty of capturing tacit knowledge in digital twins, which is knowledge that is generally not expressed but is retained in the minds of experts and is often essential to developing and operating complex engineering systems [see ‘Tacit hurdle to digital twins’ on August 26th, 2020].  The concept of tapping into someone’s mind to extract tacit knowledge brings us close to thinking about human digital twins, which so far have been restricted to computational models of various parts of human anatomy and physiology.  The idea of a digital twin of someone’s mind raises a myriad of philosophical and ethical issues.  Whilst the purpose of a digital twin of the mind of an operator of a complex system might be to better predict and understand human-machine interactions, the opportunity to use the digital twin to advance techniques of personalisation will likely be too tempting to ignore.  Personalisation is the tailoring of the digital world to respond to our personal needs, for instance using predictive algorithms to recommend what book you should read next or to suggest purchases to you.
At the moment, personalisation is driven by data derived from the tracks you make in the digital world as you surf the internet, watch videos and make purchases.  However, in the future, those predictive algorithms could be based on reading your mind, or at least its digital twin.  We worry about loss of privacy at the moment, by which we probably mean the collation of vast amounts of data about our lives by unaccountable organisations, and it worries us because of the potential for manipulation of our lives without us being aware it is happening.  Our free will is endangered by such manipulation, but it might be lost entirely to a digital twin of our mind.  To quote the philosopher Michael Lynch, you would be handing over ‘privileged access to your mental states’ and to some extent you would no longer be a unique being.  We are a long way from possessing the technology to realise a digital twin of a human mind, but the possibility is on the horizon.

Source: Richard Waters, They’re watching you, FT Weekend, 24/25 October 2020.

Image: Extract from abstract by Zahrah Resh.

Slow deep thoughts from a planet-sized brain

I overheard a clip on the radio last week in which someone was parodying the quote from Marvin, the Paranoid Android in the Hitchhiker’s Guide to the Galaxy: ‘Here I am with a brain the size of a planet and they ask me to pick up a piece of paper. Call that job satisfaction? I don’t.’  It set me thinking about something that I read a few months ago in Max Tegmark’s book: ‘Life 3.0 – being human in the age of artificial intelligence‘ [see ‘Four requirements for consciousness‘ on January 22nd, 2020].  Tegmark speculates that since consciousness seems to require different parts of a system to communicate with one another and form networks or neuronal assemblies [see ‘Digital hive mind‘ on November 30th, 2016], then the thoughts of large systems will be slower by necessity.  Hence, the process of forming thoughts in a planet-sized brain will take much longer than in a normal-sized human brain.  However, the more complex assemblies that are achievable with a planet-sized brain might imply that the thoughts and experiences would be much more sophisticated, if few and far between.  Tegmark suggests that a cosmic mind with physical dimensions of a billion light-years would only have time for about ten thoughts before dark energy fragmented it into disconnected parts; however, these thoughts and associated experiences would be quite deep.
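Tegmark's estimate can be reproduced with some back-of-envelope arithmetic.  In this sketch the light-crossing argument is the essence of his point, while the round figure of ten billion years of usable time is my illustrative assumption, not a number from the book.

```python
# Back-of-envelope version of Tegmark's estimate.  The light-crossing
# argument is his; the round figure of ten billion years before dark
# energy fragments the mind is my illustrative assumption.

brain_diameter_ly = 1e9                 # a brain a billion light-years across
years_per_thought = brain_diameter_ly   # light covers 1 light-year per year,
                                        # so one crossing takes a billion years
usable_years = 1e10                     # assumed time before fragmentation

thoughts = usable_years / years_per_thought
print(f"time for roughly {thoughts:.0f} thoughts")
```

The conclusion scales directly: double the brain's size and you halve the number of thoughts, which is why bigger is slower for any mind whose parts must communicate.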


Douglas Adams, The Hitchhiker’s Guide to the Galaxy, Penguin Random House, 2007.

Max Tegmark, Life 3.0 – being human in the age of artificial intelligence, Penguin Books, Random House, UK, 2018.


Destruction of society as a complex system?

Sadly my vacation is finished [see ‘Relieving stress‘ on July 17th, 2019] and I have reconnected to the digital world, including the news media.  Despite the sensational headlines and plenty of rhetoric from politicians, nothing very much appears to have really changed in the world.  Yes, we have a new prime minister in the UK, who has a different agenda to the previous incumbent; however, the impact of actions by politicians on society and the economy seems rather limited unless the action represents a step change and is accompanied by appropriate resources.  In addition, the consequences of such changes are often different to those anticipated by our leaders.  Perhaps this is because society is a global network with simple operating rules, some of which we know intuitively, and without a central control, because governments exert only limited and local control.  It is well-known in the scientific community that large networks, without central control but with simple operating rules, usually exhibit self-organising and non-trivial emergent behaviour.  The emergent behaviour of a complex system cannot be predicted from the behaviour of its constituent components or sub-systems, i.e., the whole is more than the sum of its parts.  The mathematical approach to describing such systems is to use non-linear dynamics with solutions lying in phase space.  Modelling complex systems is difficult and interpreting the predictions is challenging; so, it is not surprising that when the actions of government have an impact, the outcomes are often unexpected and unintended.  However, if global society can be considered as a complex system, then it would appear that its self-organising behaviour tends to blunt the effectiveness of many of the actions of government.  This seems to be a fortuitous regulatory mechanism that helps maintain the status quo.
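The claim that simple local rules without central control produce non-trivial emergent behaviour is easy to demonstrate.  The sketch below is my own illustration, not an example from the literature cited here: in Conway's Game of Life every cell obeys one simple local rule, yet a moving structure (a 'glider') emerges that the rule itself never mentions.

```python
# Illustrative sketch (my example, not from the post): Conway's Game of
# Life.  Every cell obeys the same simple local rule and there is no
# central control, yet a moving structure (a 'glider') emerges that the
# rule itself never mentions.
from collections import Counter

def step(live):
    """Advance a set of live (row, col) cells by one generation."""
    neighbours = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next generation with exactly 3 live neighbours,
    # or with 2 live neighbours if it is already alive.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After four generations the glider reappears one cell down and one right.
print(state == {(r + 1, c + 1) for r, c in glider})
```

Nothing in the update rule predicts the glider; you only discover it by running the whole system, which is the essence of why emergent behaviour defeats prediction from the parts alone.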
In addition, we tend to ignore phenomena whose complexity exceeds our powers of explanation, or we use over-simplified explanations [see ‘Is the world incomprehensible?‘ on March 15th, 2017 and ‘Blind to complexity‘ on December 19th, 2018].  And, politicians are no exception to this tendency; so, they usually legislate based on simple ideology rather than rational consideration of the likely outcomes of change on the complex system we call society. And, this is probably a further regulatory mechanism.

However, all of this is evolving rapidly because a small number of tech companies have created a central control by grabbing the flow of data between us and they are using it to manipulate those simple operating rules.  This appears to be weakening the self-organising and emergent characteristics of society so that the system can be controlled more easily without the influence of its constituent parts, i.e. us.

For a more straightforward explanation listen to Carole Cadwalladr’s TED talk on ‘Facebook’s role in Brexit – and the threat to democracy‘ or if you have more time on your hands then watch the new documentary movie ‘The Great Hack‘.  My thanks to Gillian Tett in the FT last weekend who alerted me to the scale of the issue: ‘Data brokers: from poachers to gamekeepers?’