
Fridges slow down time

We sense the passage of time by the changes that occur around us (see ‘We inhabit time as fish live in water’ on July 24th, 2019) and these changes are brought about by processes that generate entropy.  Entropy is often referred to as the arrow of time because forwards in time is always the direction in which the entropy of the universe increases, as demanded by the second law of thermodynamics (see for example ‘Subtle balance of sustainable orderliness’ on June 22nd, 2016).  The temperature in a refrigerator is sufficiently low that it slows down the processes of decay in the food stored in it (see ‘Life-time battle’ on January 30th, 2013), which effectively slows down time locally in the fridge.  However, there is a price to pay because the process of creating the cold zone in the fridge increases the entropy of the universe and moves it infinitesimally closer to cosmic heat death (see ‘Will it all be over soon?’ on November 2nd, 2016).  So, cooling the food in your fridge slows down time locally but brings the end of the universe a tiny bit closer.  Perhaps that’s not worth worrying about until you consider how many fridges there are in the world (about half a billion are sold every year) and how many other devices are generating entropy.  The end of the universe might still be billions of years away, but all that anthropogenic entropy is contributing to the increase in the temperature of the Earth’s ecosystem.
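For the quantitatively minded, here is a minimal sketch of the entropy bookkeeping in Python. The temperatures, heat load and coefficient of performance are illustrative assumptions of mine, not figures from this post; the point is simply that the entropy removed from the cold interior is more than repaid by the entropy dumped into the kitchen, so the entropy of the universe still rises.

```python
# Entropy generated by an idealised domestic fridge (illustrative numbers only).
# Heat Q_c is pumped out of the cold interior at T_c and rejected, together
# with the compressor work W, into the kitchen at T_h.

T_c = 278.0   # interior temperature, K (about 5 degC) -- assumed
T_h = 295.0   # kitchen temperature, K (about 22 degC) -- assumed
Q_c = 100e3   # heat extracted from the food, J -- assumed
COP = 2.0     # coefficient of performance of a real fridge -- assumed

W = Q_c / COP          # electrical work drawn by the compressor, J
Q_h = Q_c + W          # heat rejected into the kitchen, J (first law)

# Second law: the entropy lost by the cold interior (Q_c/T_c) is more than
# offset by the entropy gained by the warmer kitchen (Q_h/T_h).
dS_universe = Q_h / T_h - Q_c / T_c   # J/K, must be > 0 for any real fridge

COP_carnot = T_c / (T_h - T_c)        # upper bound set by the second law

print(f"Entropy generated: {dS_universe:.1f} J/K (> 0, as the second law demands)")
print(f"COP = {COP}, Carnot limit = {COP_carnot:.1f}")
```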

When you invent the ship, you also invent the shipwreck

I recently came across this quote from Paul Virilio, a French philosopher who lived from 1932 to 2018.  Actually, it is only the first part of a statement he made during an interview with Philippe Petit in 1996: ‘When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash; and when you invent electricity, you invent electrocution. Every technology carries its own negativity, which is invented at the same time as technical progress.’  These events have a catastrophic level of negativity; however, there is a more insidious form of negativity induced by every new technology. It arises as a consequence of the second law of thermodynamics, which demands that the entropy of the universe increases in all real processes.  In other words, the degree of disorder in the universe is increased every time we use technology to do something useful; indeed, whenever anything happens, the second law ensures some negativity.  This implies that the capacity to do something useful, often measured in terms of energy, is decreased not just by doing the useful thing but also by creating disorder.  Technology helps us to do more useful things more quickly; but the downside is that faster processes tend to create more entropy and disorder.  Most of this negativity is not as obvious as a shipwreck or plane crash; instead it often takes the form of pollution that eventually and inexorably disrupts the world, making it a less hospitable home for us and the rest of nature.  The forthcoming COP26 conference is generating much talk about the need for climate action but very little about the reality that we cannot avoid the demands of the second law and hence need to rethink how, when and what technology we use.
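The loss of capacity to do something useful can be made quantitative: the standard Gouy-Stodola relation of thermodynamics states that the useful work destroyed by a real process equals the ambient temperature multiplied by the entropy the process generates. A minimal sketch, with assumed illustrative numbers rather than figures from this post:

```python
# Gouy-Stodola relation: the useful work destroyed by any real process is
# W_lost = T0 * S_gen, where T0 is the temperature of the surroundings and
# S_gen is the entropy generated. Numbers below are illustrative assumptions.

T0 = 295.0        # ambient temperature, K -- assumed
S_gen = 148.8     # entropy generated by a process, J/K -- e.g. the fridge above

W_lost = T0 * S_gen   # J of work-producing capacity irreversibly destroyed
print(f"Capacity to do useful work destroyed: {W_lost/1e3:.1f} kJ")
```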

Sources:

Elaine Moore, When Big Dating leaves you standing, FT Weekend, July 8th, 2021.

Paul Virilio and Philippe Petit, Politics of the Very Worst, New York: Semiotext(e), 1999, p. 89 (available from https://mitpress.mit.edu/books/politics-very-worst).

Some things will always be unknown

The philosophy of science has oscillated between believing that everything is knowable and that some things will always be unknowable. In 1872, the German physiologist Emil du Bois-Reymond declared ‘we do not know and will not know’, implying that there would always be limits to our scientific knowledge. Thirty years later, David Hilbert, a German mathematician, stated that nothing is unknowable in the natural sciences. He believed that by considering some things to be unknowable we limited our ability to know. However, Kurt Gödel, a Viennese mathematician who moved to Princeton in 1940, demonstrated in his incompleteness theorems that for any finite mathematical system there will always be statements which are true but unprovable, and that a finite mathematical system cannot demonstrate its own consistency. I think that this implies some things will remain unknowable, or at least uncertain. Gödel believed that his theorems implied the human mind is infinitely more powerful than any finite machine, and Roger Penrose has deployed these incompleteness theorems to argue that consciousness transcends the formal logic of computers, which perhaps implies that artificial intelligence will never replace human intelligence [see ‘Four requirements for consciousness’ on January 22nd, 2020].  At a more mundane level, Gödel’s theorems imply that engineers will always have to deal with the unknowable when using mathematical models to predict the behaviour of complex systems and, of course, to avoid meta-ignorance, we have to assume that there are always unknown unknowns [see ‘Deep uncertainty and meta-ignorance’ on July 21st, 2021].
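Gödel’s theorems resist a few-line demonstration, but a closely related result, Turing’s proof that the halting problem is undecidable, captures the same flavour of unknowability and can be sketched in code. This is a companion illustration, not Gödel’s own argument; the decider halts below is hypothetical, and the point of the sketch is that it cannot exist:

```python
# A computational cousin of Godel's incompleteness: Turing's proof that no
# program can decide, for every program and input, whether it halts.

def halts(program, argument):
    """Hypothetical perfect decider: True iff program(argument) terminates.
    No such decider can be written -- that is what the sketch shows."""
    raise NotImplementedError("a universal halting decider cannot exist")

def paradox(program):
    # Do the opposite of whatever the decider predicts about self-application.
    if halts(program, program):
        while True:       # if told "you will halt", loop forever
            pass
    return "halted"       # if told "you will loop", halt immediately

# paradox(paradox) halts if and only if halts(paradox, paradox) is False,
# i.e. if and only if paradox(paradox) does NOT halt -- a contradiction.
# So some true statements about programs are permanently beyond proof.
```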

Source: Book review by Nick Stephen, ‘Journey to the Edge of Reason by Stephen Budiansky – ruthless logic’, FT Weekend, 1st June 2021.

Deep uncertainty and meta-ignorance

The term ‘unknown unknowns’ was made famous by Donald Rumsfeld almost 20 years ago when, as US Secretary of Defense, he used it in describing the lack of evidence about terrorist groups being supplied with weapons of mass destruction by the Iraqi government. However, the term was probably coined almost 50 years earlier by Joseph Luft and Harrington Ingham when they developed the Johari window as a heuristic tool to help people better understand their relationships.  In engineering, and other fields in which predictive models are important tools, it is used to describe situations about which there is deep uncertainty.  Deep uncertainty refers to situations where experts do not know or cannot agree about what models to use, how to describe the uncertainties present, or how to interpret the outcomes from predictive models.  Rumsfeld talked about known knowns, known unknowns, and unknown unknowns; an alternative, simpler but perhaps less catchy, classification is ‘the known, the unknown, and the unknowable’, which Diebold, Doherty and Herring used as part of the title of their book on financial risk management.  David Spiegelhalter suggests ‘risk, uncertainty and ignorance’ before providing a more sophisticated classification: aleatory uncertainty, epistemic uncertainty and ontological uncertainty.  Aleatory uncertainty is the inevitable unpredictability of the future that can be fully described using probability.  Epistemic uncertainty is a lack of knowledge about the structure and parameters of models used to predict the future.  Ontological uncertainty is a complete lack of knowledge and understanding about the entire modelling process, i.e. deep uncertainty.  When it is not recognised that ontological uncertainty is present, we have meta-ignorance, which means failing even to consider the possibility of being wrong.  For a number of years, part of my research effort has been focussed on predictive models that are unprincipled and untestable; in other words, they are not built on widely-accepted principles or scientific laws and it is not feasible to conduct physical tests to acquire data to demonstrate their validity [see editorial ‘On the credibility of engineering models and meta-models’, JSA 50(4):2015].  Some people would say untestability implies a model is not scientific, based on Popper’s statement that the scientific method requires a theory to be refutable.  However, in reality, unprincipled and untestable models are encountered in a range of fields, including space engineering, fusion energy and toxicology.  We have developed a set of credibility factors designed as a heuristic tool to allow the relevance of such models and their predictions to be evaluated systematically [see ‘Credible predictions for regulatory decision-making’ on December 9th, 2020].  One outcome is to allow experts to agree on their disagreements and ignorance, i.e., to define the extent of our ontological uncertainty, which is an important step towards making rational decisions about the future when there is deep uncertainty.
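To make the first two categories concrete, here is a minimal sketch in Python of a toy load-versus-strength simulation; every distribution and parameter in it is invented for illustration. Aleatory uncertainty appears as random scatter sampled inside the model; epistemic uncertainty appears as our ignorance of a model parameter, which we can only sweep; ontological uncertainty, by definition, is whatever this whole picture leaves out.

```python
# Toy Monte Carlo illustration of aleatory vs epistemic uncertainty.
# All distributions and parameters are invented for illustration.
import random

def simulate(strength_mean):
    # Aleatory: irreducible scatter, fully described by probability distributions.
    load = random.gauss(mu=100.0, sigma=10.0)           # applied load -- assumed
    strength = random.gauss(mu=strength_mean, sigma=5.0)  # component strength
    return strength > load   # does the component survive this trial?

# Epistemic: we are unsure of the mean strength itself, so we sweep
# candidate values rather than sampling from a known distribution.
for strength_mean in (110.0, 120.0, 130.0):   # candidate parameters -- assumed
    survivals = sum(simulate(strength_mean) for _ in range(100_000))
    print(f"mean strength {strength_mean}: P(survive) ~ {survivals/100_000:.3f}")

# Ontological uncertainty has no counterpart in this script: it is the risk
# that the whole load-vs-strength model is the wrong picture of reality.
```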

References

Diebold FX, Doherty NA, Herring RJ, eds. The Known, the Unknown, and the Unknowable in Financial Risk Management: Measurement and Theory Advancing Practice. Princeton, NJ: Princeton University Press, 2010.

Spiegelhalter D. Risk and uncertainty communication. Annual Review of Statistics and Its Application. 4:31-60, 2017.

Patterson EA, Whelan MP. On the validation of variable fidelity multi-physics simulations. J. Sound and Vibration. 448:247-58, 2019.

Patterson EA, Whelan MP, Worth AP. The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application. Computational Toxicology. 100144, 2020.