
Boltzmann’s brain

Ludwig Boltzmann developed a statistical explanation of the second law of thermodynamics by defining entropy as being proportional to the logarithm of the number of ways in which we can arrange a system [see ‘Entropy on the brain’ on November 29th, 2017].  The mathematical expression of this definition is engraved on his headstone.  The second law states that the entropy of the universe is always increasing, and Boltzmann argued that it implies the universe must have been created in a very low entropy state.  Four decades earlier, in 1854, William Thomson concluded that the dissipation of heat arising from the second law would lead to the ‘death’ of the universe [see ‘Cosmic heat death’ on February 18th, 2015], while the big bang theory for the creation of the universe evolved about twenty years after Boltzmann’s death.  The probability of the very low entropy state required to bring the universe into existence is very small because it implies random fluctuations in energy and matter leading to a highly ordered state.  One analogy would be the probability of dead leaves floating on the surface of a pond arranging themselves to spell your name.  It is easy to think of fluctuations that are more likely to occur because they involve smaller systems, such as one that would bring only our solar system into existence, or, progressively more likely, only our planet, only the room in which you are sitting reading this blog, or only your brain.  The last would imply that everything is in your imagination, and ultimately that is why Boltzmann’s argument is not widely accepted, although we do not have a good explanation for the apparent low entropy state at the start of the universe.  Jean-Paul Sartre wrote in his book Nausea: ‘I exist because I think…and I cannot stop myself from thinking.  At this very moment – it’s frightful – if I exist, it is because I am horrified at existing.’  Perhaps most people would find horrifying the logical extension of Boltzmann’s argument about the start of the universe to everything existing only in our minds.  Boltzmann’s work on statistical mechanics and the second law of thermodynamics is widely accepted and supports the case for him being a genius; however, it raised more questions than answers and was widely criticised during his lifetime, which contributed to him taking his own life in 1906.
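
For reference, the relation engraved on his headstone can be written in modern notation as below; this is the standard textbook form rather than a quotation from the sources listed.

```latex
% Boltzmann's entropy relation (the headstone carries the form S = k . log W)
\[
  S = k_B \ln W
\]
% S   : entropy of the macrostate
% k_B : Boltzmann's constant (about 1.38 \times 10^{-23} J/K)
% W   : number of microscopic arrangements (microstates) consistent with the macrostate
```

A highly ordered state corresponds to a small W and hence a low entropy, which is why the fluctuation needed to produce the whole universe in such a state is so improbable.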

Sources:

Paul Sen, Einstein’s fridge: the science of fire, ice and the universe.  London: Harper Collins, 2021.

Jean-Paul Sartre, Nausea.  London: Penguin Modern Classics, 2000.

If you don’t succeed, try and try again…

[Photograph of S-shaped plate]

You would not think it was difficult to build a thin, flat metallic plate using a digital description of the plate and a Laser Powder Bed Fusion (L-PBF) machine that can build complex components, such as hip prostheses.  But it is, as we have discovered since we started our research project on the thermoacoustic response of additively manufactured parts [see ‘Slow start to an exciting new project on thermoacoustic response of AM metals’ on September 9th, 2020].  L-PBF involves using a laser beam to melt selected regions of a thin layer of metal powder spread over a flat bed.  The selected regions represent a cross-section of the desired three-dimensional component, and repeating the process for each successive cross-section builds the component additively as each layer solidifies.  And there, in those last four words, lies the problem, because ‘as each layer solidifies’ the temperature differences between the layers cause different amounts of thermal expansion, which results in strains being locked into our thin plates.  Our plates are too thin to build with their plane surfaces horizontal, or perpendicular to the laser beam, so instead we build them with their plane surfaces parallel to the laser beam, i.e. vertical like a street sign.  In our early attempts, the residual stresses induced by the locked-in strains caused the plate to buckle into an S-shape before the build was complete (see image).  We solved this problem by building buttresses at the edges of the plate.  However, when we remove the buttresses and detach the plate from the build platform, it buckles into a dome shape; in fact, you can press the centre of the plate and make it snap back and forth noisily.  While we are making progress in understanding the mechanisms at work, we have some way to go before we can confidently produce flat plates by additive manufacturing for comparison with our earlier work on the performance of conventionally, or subtractively, manufactured plates subjected to the thermoacoustic loading experienced by the skin of a hypersonic vehicle [see ‘Potential dynamic buckling in hypersonic vehicle skin’ on July 1st, 2020] or the containment walls of a fusion reactor.  Sometimes research is painfully slow, but no one ever talks about it, perhaps because we quickly forget the painful parts once we have a successful outcome to brag about.  Yet it is often precisely the painful repetitions of ‘try and try again’ that allow us to reach the bragging stage of a successful outcome.
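
A back-of-the-envelope sketch, using standard textbook relations rather than our own process models, shows why thin plates are so vulnerable: the locked-in thermal strain scales with the temperature difference between layers, while the stress needed to buckle a plate falls rapidly as the plate becomes thinner.

```latex
% Hedged sketch using textbook relations, not the project's process model.
% Thermal strain from a temperature difference \Delta T between a solidifying layer
% and the cooler material beneath it, with \alpha the coefficient of thermal expansion:
\[
  \varepsilon_{th} = \alpha \, \Delta T
\]
% If that contraction is constrained by the layers below, a residual stress of order
\[
  \sigma_r \approx E \, \alpha \, \Delta T
\]
% is locked in, where E is Young's modulus.  A thin rectangular plate buckles when the
% compressive residual stress exceeds the critical buckling stress, roughly
\[
  \sigma_{cr} \approx \frac{k \pi^2 E}{12\,(1-\nu^2)} \left(\frac{t}{b}\right)^2
\]
% with t the thickness, b the plate width, \nu Poisson's ratio and k a factor set by the
% boundary conditions; the (t/b)^2 term is why thin plates buckle at such low stresses.
```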

The research is funded jointly by the National Science Foundation (NSF) in the USA and the Engineering and Physical Sciences Research Council (EPSRC) in the UK (see Grants on the Web).

References

Silva AS, Sebastian CM, Lambros J and Patterson EA, 2019. High temperature modal analysis of a non-uniformly heated rectangular plate: Experiments and simulations. J. Sound & Vibration, 443, pp.397-410.

Magana-Carranza R, Sutcliffe CJ and Patterson EA, 2021. The effect of processing parameters and material properties on residual forces induced in Laser Powder Bed Fusion (L-PBF). Additive Manufacturing, 46:102192.

Some things will always be unknown

[Image: fruit fly nervous system, Albert Cardona, HHMI Janelia Research Campus, Wellcome Image Awards 2015]

The philosophy of science has oscillated between believing that everything is knowable and believing that some things will always be unknowable. In 1872, the German physiologist Emil du Bois-Reymond declared ‘we do not know and will not know’, implying that there would always be limits to our scientific knowledge. Thirty years later, the German mathematician David Hilbert stated that nothing is unknowable in the natural sciences; he believed that by considering some things to be unknowable we limit our ability to know. However, Kurt Gödel, a Viennese mathematician who moved to Princeton in 1940, demonstrated in his incompleteness theorems that, in any consistent formal system rich enough to describe arithmetic, there will always be statements that are true but unprovable, and that such a system cannot demonstrate its own consistency. I think that this implies some things will remain unknowable, or at least uncertain. Gödel believed that his theorems implied that the human mind is infinitely more powerful than any finite machine, and Roger Penrose has deployed the incompleteness theorems to argue that consciousness transcends the formal logic of computers, which perhaps implies that artificial intelligence will never replace human intelligence [see ‘Four requirements for consciousness’ on January 22nd, 2020].  At a more mundane level, Gödel’s theorems imply that engineers will always have to deal with the unknowable when using mathematical models to predict the behaviour of complex systems and, of course, to avoid meta-ignorance we have to assume that there are always unknown unknowns [see ‘Deep uncertainty and meta-ignorance’ on July 21st, 2021].

Source: Book review by Nick Stephen, ‘Journey to the Edge of Reason by Stephen Budiansky – ruthless logic’, FT Weekend, 1st June 2021.

Deep uncertainty and meta-ignorance

The term ‘unknown unknowns’ was made famous by Donald Rumsfeld almost 20 years ago when, as US Secretary of Defense, he used it in describing the lack of evidence about terrorist groups being supplied with weapons of mass destruction by the Iraqi government. However, the term was probably coined almost 50 years earlier by Joseph Luft and Harrington Ingham when they developed the Johari window as a heuristic tool to help people better understand their relationships.  In engineering, and in other fields in which predictive models are important tools, it is used to describe situations about which there is deep uncertainty.  Deep uncertainty refers to situations where experts do not know, or cannot agree about, what models to use, how to describe the uncertainties present, or how to interpret the outcomes from predictive models.  Rumsfeld talked about known knowns, known unknowns, and unknown unknowns; an alternative, simpler but perhaps less catchy, classification is ‘the known, the unknown, and the unknowable’, which was used by Diebold, Doherty and Herring as part of the title of their book on financial risk management.  David Spiegelhalter suggests ‘risk, uncertainty and ignorance’ before providing a more sophisticated classification: aleatory uncertainty, epistemic uncertainty and ontological uncertainty.  Aleatory uncertainty is the inevitable unpredictability of the future that can be fully described using probability.  Epistemic uncertainty is a lack of knowledge about the structure and parameters of the models used to predict the future.  Ontological uncertainty is a complete lack of knowledge and understanding of the entire modelling process, i.e. deep uncertainty.  When we do not recognise that ontological uncertainty is present, we have meta-ignorance, which means failing even to consider the possibility of being wrong.  For a number of years, part of my research effort has been focussed on predictive models that are unprincipled and untestable; in other words, they are not built on widely accepted principles or scientific laws, and it is not feasible to conduct physical tests to acquire data to demonstrate their validity [see editorial ‘On the credibility of engineering models and meta-models’, JSA 50(4):2015].  Some people would say that untestability implies a model is not scientific, based on Popper’s requirement that a scientific theory must be refutable.  However, in reality, unprincipled and untestable models are encountered in a range of fields, including space engineering, fusion energy and toxicology.  We have developed a set of credibility factors designed as a heuristic tool to allow the relevance of such models and their predictions to be evaluated systematically [see ‘Credible predictions for regulatory decision-making’ on December 9th, 2020].  One outcome is to allow experts to agree on their disagreements and ignorance, i.e. to define the extent of our ontological uncertainty, which is an important step towards making rational decisions about the future when there is deep uncertainty.
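
As a loose, toy illustration (not drawn from the research mentioned above), a few lines of simulation can show the difference between aleatory uncertainty, which remains even when the model is fully known, and epistemic uncertainty, which we can represent as a range of plausible model parameters; ontological uncertainty is precisely what such a script cannot capture.

```python
# Toy sketch of aleatory vs epistemic uncertainty for a hypothetical component
# whose time to failure is assumed to follow an exponential distribution.
import random

random.seed(42)

def time_to_failure(rate):
    """Aleatory uncertainty: even with a known failure rate, every draw differs."""
    return random.expovariate(rate)

# Epistemic uncertainty: we are unsure of the failure rate itself, so we carry a
# range of plausible values (hypothetical numbers) rather than a single one.
plausible_rates = [0.08, 0.10, 0.12]  # failures per unit time

for rate in plausible_rates:
    draws = [time_to_failure(rate) for _ in range(10_000)]
    mean_life = sum(draws) / len(draws)
    print(f"rate = {rate:.2f}  ->  mean simulated life = {mean_life:.1f}")

# Ontological uncertainty is what this script cannot show: the possibility that an
# exponential model, or any model we have thought of, is the wrong description of
# the failure process altogether.
```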

References

Diebold FX, Doherty NA, Herring RJ, eds. The Known, the Unknown, and the Unknowable in Financial Risk Management: Measurement and Theory Advancing Practice. Princeton, NJ: Princeton University Press, 2010.

Spiegelhalter D, 2017. Risk and uncertainty communication. Annual Review of Statistics and Its Application, 4, pp.31-60.

Patterson EA and Whelan MP, 2019. On the validation of variable fidelity multi-physics simulations. J. Sound & Vibration, 448, pp.247-258.

Patterson EA, Whelan MP and Worth AP, 2020. The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application. Computational Toxicology, 100144.