
Reduction in usefulness of reductionism

A couple of months ago I wrote about a set of credibility factors for computational models [see 'Credible predictions for regulatory decision-making' on December 9th, 2020] that we designed to inform the interactions between researchers, model builders and decision-makers that establish trust in the predictions from computational models [1].  This is important because computational modelling is becoming ubiquitous in the development of everything from automobiles and power stations to drugs and vaccines, which inevitably leads to its use in supporting regulatory applications.  However, there is another motivation underpinning our work: the systems being modelled are becoming increasingly complex and increasingly likely to exhibit emergent behaviour [see 'Emergent properties' on September 16th, 2015], which makes it correspondingly unlikely that a reductionist approach to establishing model credibility will succeed [2].  The reductionist approach to science, pioneered by Descartes and Newton, has served science well for hundreds of years and is based on the concept that everything about a complex system can be understood by reducing it to its smallest constituent parts.  It is the method of analysis that underpins almost everything you learn as an undergraduate engineer or physicist.  However, reductionism loses its power when a system is more than the sum of its parts, i.e., when it exhibits emergent behaviour.  Our approach to establishing model credibility is more holistic than traditional methods.  This seems appropriate when modelling complex systems for which complete knowledge of the relationships and patterns of behaviour may not be attainable, e.g., when unexpected or unexplainable emergent behaviour occurs [3].  The hegemony of reductionism in science made us nervous about writing about its shortcomings four years ago when we first published our ideas about model credibility [2].  So, I was pleased to see a paper published last year [4] that identified five fundamental properties of biology that weaken the power of reductionism, namely: (1) biological variation is widespread and persistent; (2) biological systems are relentlessly nonlinear; (3) biological systems contain redundancy; (4) biology consists of multiple systems interacting across different time and spatial scales; and (5) biological properties are emergent.  Many engineered systems possess all five of these fundamental properties – you just need to look at them from the appropriate perspective, for example, through a microscope to see the variation in the microstructure of a mass-produced part.  Hence, in the future, there will need to be an increasing emphasis on holistic approaches and systems thinking in both the education and practice of engineers as well as biologists.
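To make the idea of emergence concrete, consider Conway's Game of Life, a favourite toy example (my own illustration, not drawn from the references): a couple of simple local rules produce a 'glider' that travels across the grid, behaviour that cannot be deduced by examining any single cell in isolation.  A minimal sketch in Python:

```python
# A minimal illustration of emergent behaviour: Conway's Game of Life.
# Simple local rules produce a 'glider' that moves across the grid,
# a property of the pattern as a whole, not of any individual cell.
import numpy as np

def step(grid):
    """Apply one generation of Conway's rules to a 2D array of 0s and 1s."""
    # Count the eight neighbours of every cell (toroidal boundary).
    neighbours = sum(np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A live cell survives with 2 or 3 neighbours; a dead cell is born with 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# Seed a 10x10 grid with a glider and advance it four generations.
grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = step(grid)
print(grid)
```

Running it shows the five live cells reappearing one row down and one column across every four generations; the motion belongs to the pattern, not to any single cell, which is exactly what a cell-by-cell, reductionist analysis would miss.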

For more on emergence in computational modelling see Manuel DeLanda, Philosophy and Simulation: The Emergence of Synthetic Reason, Continuum, London, 2011. And, for more on systems thinking, see Fritjof Capra and Pier Luigi Luisi, The Systems View of Life: A Unifying Vision, Cambridge University Press, 2014.

References:

[1] Patterson EA, Whelan MP & Worth A, The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application, Computational Toxicology, 17:100144, 2021.

[2] Patterson EA & Whelan MP, A framework to establish credibility of computational models in biology, Progress in Biophysics and Molecular Biology, 129:13-19, 2017.

[3] Patterson EA & Whelan MP, On the validation of variable fidelity multi-physics simulations, J. Sound & Vibration, 448:247-258, 2019.

[4] Pruett WA, Clemmer JS & Hester RL, Physiological Modeling and Simulation—Validation, Credibility, and Application, Annual Review of Biomedical Engineering, 22:185-206, 2020.

Digital twins could put at risk what it means to be human

I have written in the past about my research on the development and use of digital twins.  A digital twin is a functional representation, in a virtual world, of a real-world entity that is continually updated with data from the real world [see 'Fourth industrial revolution' on July 4th, 2018 and also a short video at https://www.youtube.com/watch?v=iVS-AuSjpOQ].  I am working with others on developing an integrated digital nuclear environment from which digital twins of individual power stations could be spawned in parallel with the manufacture of their physical counterparts [see 'Enabling or disruptive technology for nuclear engineering' on January 1st, 2015 and 'Digitally-enabled regulatory environment for fusion power-plants' on March 20th, 2019].  A couple of months ago, I wrote about the difficulty of capturing tacit knowledge in digital twins, that is, knowledge which is generally not expressed but is retained in the minds of experts and is often essential to developing and operating complex engineering systems [see 'Tacit hurdle to digital twins' on August 26th, 2020].  The concept of tapping into someone's mind to extract tacit knowledge brings us close to thinking about human digital twins, which so far have been restricted to computational models of various parts of human anatomy and physiology.  The idea of a digital twin of someone's mind raises a myriad of philosophical and ethical issues.  Whilst the purpose of a digital twin of the mind of an operator of a complex system might be to better predict and understand human-machine interactions, the opportunity to use that digital twin to advance techniques of personalisation will likely be too tempting to ignore.  Personalisation is the tailoring of the digital world to respond to our personal needs, for instance using predictive algorithms to recommend what book you should read next or to suggest purchases to you.  At the moment, personalisation is driven by data derived from the tracks you make in the digital world as you surf the internet, watch videos and make purchases.  However, in the future, those predictive algorithms could be based on reading your mind, or at least its digital twin.  We worry about loss of privacy at the moment, by which we probably mean the collation of vast amounts of data about our lives by unaccountable organisations, and it worries us because of the potential for our lives to be manipulated without our being aware that it is happening.  Our free will is endangered by such manipulation, but it might be lost entirely to a digital twin of our mind.  To quote the philosopher Michael Lynch, you would be handing over 'privileged access to your mental states' and, to some extent, you would no longer be a unique being.  We are a long way from possessing the technology to realise a digital twin of a human mind, but the possibility is on the horizon.
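To give a flavour of the sort of predictive algorithm involved, here is a minimal sketch of item-based collaborative filtering over an entirely hypothetical matrix of book ratings; real systems are vastly more elaborate, but the principle of inferring your preferences from the tracks you and others leave behind is the same:

```python
# A minimal sketch of the kind of predictive personalisation described above:
# item-based collaborative filtering over a hypothetical user-item matrix
# of book ratings (illustrative data only, not any real system's method).
import numpy as np

# Rows are users, columns are books; 0 means unrated.
ratings = np.array([[5, 4, 0, 1],
                    [4, 5, 1, 0],
                    [1, 0, 5, 4],
                    [0, 1, 4, 5]], dtype=float)

def cosine_similarity(a, b):
    """Similarity between two books, judged by how users rated them."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user, k=1):
    """Score each unrated book by its similarity to the user's rated books."""
    scores = {}
    for j in range(ratings.shape[1]):
        if ratings[user, j] == 0:  # only recommend unseen books
            scores[j] = sum(cosine_similarity(ratings[:, j], ratings[:, i]) * ratings[user, i]
                            for i in range(ratings.shape[1]) if ratings[user, i] > 0)
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(user=0))  # suggests the unrated book most like user 0's favourites
```

Even this toy version infers a user's likely tastes entirely from correlations with other people's recorded behaviour, which is precisely the mechanism whose extension to a digital twin of the mind is at issue above.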

Source: Richard Waters, They’re watching you, FT Weekend, 24/25 October 2020.

Image: Extract from abstract by Zahrah Resh.

Slow deep thoughts from a planet-sized brain

I overheard a clip on the radio last week in which someone was parodying the quote from Marvin, the Paranoid Android in the Hitchhiker’s Guide to the Galaxy: ‘Here I am with a brain the size of a planet and they ask me to pick up a piece of paper. Call that job satisfaction? I don’t.’  It set me thinking about something that I read a few months ago in Max Tegmark’s book: ‘Life 3.0 – being human in the age of artificial intelligence‘ [see ‘Four requirements for consciousness‘ on January 22nd, 2020].  Tegmark speculates that since consciousness seems to require different parts of a system to communicate with one another and form networks or neuronal assemblies [see ‘Digital hive mind‘ on November 30th, 2016], then the thoughts of large systems will be slower by necessity.  Hence, the process of forming thoughts in a planet-sized brain will take much longer than in a normal-sized human brain.  However, the more complex assemblies that are achievable with a planet-sized brain might imply that the thoughts and experiences would be much more sophisticated, if few and far between.  Tegmark suggests that a cosmic mind with physical dimensions of a billion light-years would only have time for about ten thoughts before dark energy fragmented it into disconnected parts; however, these thoughts and associated experiences would be quite deep.
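Tegmark's reasoning is essentially about signal travel time: a thought requires signals to traverse the system, so the minimum 'thought time' scales with its size.  A back-of-the-envelope calculation, using illustrative figures of my own choosing rather than numbers from the book, shows the scaling:

```python
# Back-of-the-envelope scaling of 'thought time' with brain size, following
# Tegmark's argument that a thought requires signals to cross the system.
# The sizes and speeds below are illustrative assumptions, not from the book.
LIGHT_SPEED = 3.0e8      # m/s, upper limit for any signal
NEURAL_SPEED = 1.0e2     # m/s, rough speed of signals in a human brain
LIGHT_YEAR = 9.46e15     # metres
YEAR = 3.15e7            # seconds

brains = {
    "human brain (0.1 m, neural signals)": 0.1 / NEURAL_SPEED,
    "planet-sized brain (1.3e7 m, light-speed)": 1.3e7 / LIGHT_SPEED,
    "cosmic mind (1e9 light-years, light-speed)": 1e9 * LIGHT_YEAR / LIGHT_SPEED,
}

for name, seconds in brains.items():
    print(f"{name}: one crossing takes {seconds:.3g} s ({seconds / YEAR:.3g} years)")

# One crossing of the cosmic mind takes about a billion years, so only a
# handful of thoughts fit within the roughly ten-billion-year timescale on
# which dark energy fragments such a structure, consistent with Tegmark's
# estimate of about ten thoughts.
```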

Sources:

Douglas Adams, The Hitchhiker’s Guide to the Galaxy, Penguin Random House, 2007.

Max Tegmark, Life 3.0 – being human in the age of artificial intelligence, Penguin Books, Random House, UK, 2018.


More laws of biology

Four years ago I wrote a post asking whether there were any fundamental laws of biology that are sufficiently general to apply beyond the context of life on Earth ['Laws of biology?' on January 16th, 2016].  I suggested Dollo's law, that diversity and complexity increase in evolutionary systems; the Hardy-Weinberg law, that allele and genotype frequencies remain constant from generation to generation in the absence of other evolutionary influences; and the Michaelis-Menten law governing enzymatic reactions.  Recently, I came across a simpler statement of the laws of biology proposed by Edward O. Wilson.  He states that the first law of biology is that all entities and processes of life are obedient to the laws of physics and chemistry; and the second law is that all evolution, beyond minor random perturbations due to high mutation rates and random fluctuations in the number of competing genes, is due to natural selection.  It seems likely that these simpler laws will be universally applicable; however, until we find evidence of extra-terrestrial life, they will remain untestable in a universal context, unlike the laws of physics.
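For completeness, the two quantitative laws mentioned above can be written down compactly; the little sketch below evaluates both, with parameter values that are arbitrary examples of my own rather than anything from Wilson:

```python
# Illustrative evaluation of the two quantitative laws named above.
# Parameter values are arbitrary examples, not taken from the post.

def hardy_weinberg(p):
    """Genotype frequencies (AA, Aa, aa) at equilibrium for allele
    frequency p: p^2 + 2pq + q^2 = 1, with q = 1 - p."""
    q = 1.0 - p
    return p**2, 2 * p * q, q**2

def michaelis_menten(s, v_max, k_m):
    """Reaction rate v = v_max * [S] / (K_m + [S]) for substrate
    concentration [S], saturating towards v_max as [S] grows."""
    return v_max * s / (k_m + s)

print(hardy_weinberg(p=0.7))                          # (0.49, 0.42, 0.09), summing to 1
print(michaelis_menten(s=2.0, v_max=10.0, k_m=0.5))   # 8.0, approaching v_max
```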

Source:

Edward O. Wilson, Letters to a Young Scientist, Liveright Pub. Co., NY, 2013.