Category Archives: Thermodynamics

Admiral’s comments on fission hold for fusion 70 years later

Last month the US Energy Secretary, Jennifer Granholm, announced a successful experiment at the Lawrence Livermore National Laboratory in which 192 lasers were used to pump 2.05 megajoules (MJ) of energy into a capsule, heating its contents to about 100 million degrees Centigrade and causing fusion of hydrogen nuclei with the release of 3.15 MJ of energy.  An apparent gain of 1.1 MJ, until you take account of the roughly 300 MJ consumed by the 192 lasers.  The reaction in the media to this fusion energy experiment, and the difficulties associated with building a practical fusion power plant, such as the Spherical Tokamak for Energy Production (STEP) project in the UK (see ‘Celebrating engineering success‘ on November 11th, 2022), reminded me of a well-known memorandum penned by Admiral Rickover in 1953.  Rickover was first tasked, as a Captain, with looking at atomic power in May 1946, not long after the first human-made self-sustaining nuclear chain reaction was initiated in Chicago Pile #1 during an experiment led by Enrico Fermi in 1942.  He went on, as Admiral Rickover, to direct the US Navy’s nuclear propulsion programme, and the Nautilus, the first nuclear-powered submarine, was launched in 1954.  With thanks to a regular reader of this blog who sent me a copy of the memo, and apologies to Admiral Rickover, here is his memorandum edited to apply to fusion energy:
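For readers who want the arithmetic spelled out, a rough comparison of the two gains implied by the figures quoted above is given below (the symbols are mine, not the laboratory’s):

```latex
% Target gain: fusion energy released divided by laser energy delivered to the capsule
Q_{\mathrm{target}} = \frac{3.15\ \mathrm{MJ}}{2.05\ \mathrm{MJ}} \approx 1.5
% Facility gain: fusion energy released divided by the energy consumed by the 192 lasers
Q_{\mathrm{facility}} = \frac{3.15\ \mathrm{MJ}}{300\ \mathrm{MJ}} \approx 0.01
```

So the experiment produced a net gain at the target but returned only about one percent of the energy drawn by the lasers, which is why a practical power plant remains a substantial engineering challenge.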

Important decisions about the future of fusion energy must frequently be made by people who do not necessarily have an intimate knowledge of the technical aspects of fusion.  These people are, nonetheless, interested in what a fusion power plant will do, how much it will cost, how long it will take to build and how long and how well it will operate.  When they attempt to learn these things, they become aware of the confusion existing in the field of fusion energy.  There appears to be unresolved conflict on almost every issue that arises.

I believe that the confusion stems from a failure to distinguish between the academic and the practical.  These apparent conflicts can usually be explained only when the various aspects of the issue are resolved into their academic and practical components. To aid in this resolution, it is possible to define in a general way those characteristics which distinguish one from the other.

An academic fusion reactor almost always has the following basic characteristics: (1) It is simple. (2) It is small.  (3) It is cheap. (4) It is light. (5) It can be built very quickly. (6) It is very flexible in purpose. (7) The reactor is in the study phase.  It is not being built now.  On the other hand, a practical fusion reactor can be distinguished by the following characteristics: (1) It is being built now.  (2) It is behind schedule. (3) It is requiring an immense amount of development on apparently trivial items. (4) It is very expensive. (5) It takes a long time to build because of the engineering development problems. (6) It is large. (7) It is complicated.

The tools of the academic-reactor designer are a piece of paper and a pencil with an eraser. If a mistake is made, it can always be erased and changed.  If the practical-reactor designer errs, they wear the mistake around their neck; it cannot be erased.  Everyone can see it.

The academic-reactor designer is a dilettante.  They have not had to assume any real responsibility in connection with their projects.  They are free to luxuriate in elegant ideas, the practical shortcomings of which can be relegated to the category of ‘mere technical details’.  The practical-reactor designer must live with these same technical details.  Although recalcitrant and awkward, they must be solved and cannot be put off until tomorrow.  Their solutions require people, time and money.

Unfortunately for those who must make far-reaching decisions without the benefit of an intimate knowledge of fusion technology, and unfortunately for the interested public, it is much easier to get the academic side of an issue than the practical side. For the most part, those involved with academic fusion reactors have more inclination and time to present their ideas in reports and orally to those who will listen.  Since they are innocently unaware of the real and hidden difficulties of their plans, they speak with great facility and confidence.  Those involved with practical fusion reactors, humbled by their experiences, speak less and worry more.

Yet it is incumbent on those in high places to make wise decisions, and it is reasonable and important that the public be correctly informed.  It is consequently incumbent on all of us to state the facts as forthrightly as possible.  Although it is probably impossible to have fusion technology ideas labelled as ‘practical’ or ‘academic’ by the authors, it is worthwhile for both authors and audience to bear in mind this distinction and to be guided thereby.

Image: The target chamber of LLNL’s National Ignition Facility, where 192 laser beams delivered more than 2 million joules of ultraviolet energy to a tiny fuel pellet to create fusion ignition on Dec. 5, 2022 from https://www.llnl.gov/news/national-ignition-facility-achieves-fusion-ignition

Storm in a computer

As part of my undergraduate course on thermodynamics [see ‘Change in focus’ on October 5th, 2022] and my MOOC on Thermodynamics in Everyday Life [see ‘Engaging learners on-line‘ on May 25th, 2016], I used to ask students to read Chapter 1, ‘The Storm in the Computer’, from Philosophy and Simulation: The Emergence of Synthetic Reason by Manuel Delanda.  It is a mind-stretching read and I recommended that students read it at least twice in order to appreciate its messages.  To support their learning, I provided them with a précis of the chapter, which is reproduced below in a slightly modified form.

At the start of the chapter, the simplest emergent properties, such as the temperature and pressure of a body of water in a container, are discussed [see ‘Emergent properties’ on September 16th, 2015].  These properties are described as emergent because they are not the property of a single component of the system, that is, of individual water molecules, but are features of the system as a whole.  They arise from an objective averaging process for the billions of molecules of water in the container.  The discussion is extended to two bodies of water, one hot and one cold, brought into contact with one another.  An average temperature will emerge with a redistribution of molecules to create a less ordered state.  The spontaneous flow of energy, as temperature differences cancel themselves, is identified as an important driver or capability, especially when the hot body is continually refreshed by a fire, for instance.  Engineers harness energy gradients or differences and the resultant energy flow to do useful work, for instance in turbines.
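As a minimal illustration of that averaging, assuming two bodies of water with the same specific heat capacity and no losses to the surroundings (a simplification of the scenario described above), the emergent equilibrium temperature is just the mass-weighted mean:

```latex
T_{\mathrm{eq}} = \frac{m_h T_h + m_c T_c}{m_h + m_c}
```

For example, equal masses at 80 °C and 20 °C settle at 50 °C, a property of the pair of bodies as a whole rather than of any individual molecule.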

However, Delanda does not deviate to discuss how engineers exploit energy gradients.  Instead, he identifies the spontaneous flow of molecules, as they self-organise across an energy gradient, as the driver of circulatory flows in the oceans and atmosphere, known as convection cells.  Five to eight convection cells can merge in the atmosphere to form a thunderstorm.  In thunderstorms, when the rising water vapour becomes rain, the phase transition from vapour to liquid releases latent heat or energy that helps sustain the storm system.  At the same time, gradients in electrical charge between the upper and lower sections of the storm generate lightning.

Delanda highlights that emergent properties can be established by elucidating the mechanisms that produce them at one scale and these emergent properties can become the components of a phenomenon at a much larger scale.  This allows scientists and engineers to construct models that take for granted the existence of emergent properties at one scale to explain behaviour at another, which is called ‘mechanism-independence’.  For example, it is unnecessary to model molecular movement to predict heat transfer.  These ideas allow simulations to replicate behaviour at the system level without the need for high-fidelity representations at all scales.  The art of modelling is the ability to decide what changes do, and what changes do not, make a difference, i.e., what to include and exclude.
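The following sketch is my own illustration of mechanism-independence, not an example from the book: it predicts the temperature profile along a one-dimensional bar using only the emergent property of thermal diffusivity, with no attempt to model individual molecules (the dimensions and material value are assumptions chosen for illustration).

```python
import numpy as np

# Transient heat conduction in a 1D bar using an explicit finite-difference scheme.
# The only material input is the emergent property alpha (thermal diffusivity);
# molecular motion is taken for granted and never modelled.
alpha = 1.4e-7            # thermal diffusivity, m^2/s (roughly that of water)
length = 0.1              # bar length, m
n = 51                    # number of grid points
dx = length / (n - 1)
dt = 0.4 * dx**2 / alpha  # time step chosen inside the explicit stability limit (0.5)

T = np.full(n, 20.0)      # initial temperature everywhere, deg C
T[0], T[-1] = 80.0, 20.0  # one end held hot, the other held cold

for _ in range(5000):     # march forward in time
    # dT/dt = alpha * d2T/dx2, discretised on the interior points
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(T.round(1))         # temperature profile approaching the steady-state gradient
```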

Source:

Manuel Delanda, Philosophy and Simulation: The Emergence of Synthetic Reason, Continuum, London, 2011.

Image: Painting of a stormy seascape by Sarah Evans, owned by the author.

Delaying cataclysmic events might hasten their advent

In thermodynamics, students are taught to draw a boundary around the system they want to analyse and to decide whether the boundary is open or closed to transfers of mass and energy, based on the scenario they want to model.  The next step is to balance the energy flows across the boundary against the change in the energy content of the system.  This is an application of the first law of thermodynamics, which states that energy can neither be created nor destroyed.  Rudolf Clausius is credited with discovering entropy when he realised that when energy flows as heat across a system boundary some of it is degraded into disordered energy, or entropy; for instance, when a steam engine does work it also discharges heat to the environment.  The second law of thermodynamics states that the entropy of the universe increases in all real processes.  Thermodynamicists are not the only people who draw boundaries and decide whether they are open or closed.  Politicians and generals draw national boundaries occasionally and, more frequently, decide whether they are open or closed to people, goods and capital.  After the first world war, economists such as Friedrich Hayek and Ludwig von Mises proposed that conflict would be less likely if people, goods and capital could flow freely across national boundaries.  These ideas became the principles on which the IMF and World Bank were formed at Bretton Woods in July 1944, in the closing stages of the second world war.  Presidents of the USA since Ronald Reagan have taken these ideas a step further by unleashing capitalism through deregulation of markets, in the belief that markets know best.  However, ever-growing capital generates an ever-increasing rate of creation of entropy and disorder in the world [see ‘Existential connection between capitalism and entropy‘ on May 4th 2022], and perhaps attempting to reduce conflict by unfettering capital actually accelerates the descent into chaos and disorder, because entropy increases in every transaction.
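In symbols, the bookkeeping described above can be written as follows (the notation is mine, added for clarity):

```latex
% First law for a closed system: heat in (Q) minus work out (W) equals the
% change in the internal energy (U) of whatever lies inside the boundary
\Delta U = Q - W
% Clausius: heat \delta Q crossing the boundary at absolute temperature T
% changes the entropy S of the system by
\mathrm{d}S = \frac{\delta Q_{\mathrm{rev}}}{T}
% Second law: in every real (irreversible) process the entropy of the universe increases
\Delta S_{\mathrm{universe}} = \Delta S_{\mathrm{system}} + \Delta S_{\mathrm{surroundings}} > 0
```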

Sources:

Rana Foroohar, When the market fails us, FT Weekend, 23 April/24 April 2022.

Gary Gerstle, The Rise and Fall of the Neoliberal Order: America and the World in the Free Market Era, Oxford: OUP, 2022.

The cataclysmic events referred to in the title are those identified by Thomas Piketty as being the only means by which economic inequality is reduced, i.e., wars and revolutions [see ‘Existential connection between capitalism and entropy‘ on May 4th 2022].  The title was inspired by correspondence from Bob Handscombe with whom I wrote a book entitled ‘The Entropy Vector: Connecting Science and Business‘.

Image: Detail from an abstract painting by Zahrah R.

Existential connection between capitalism and entropy

According to Raj Patel and Jason W Moore, in his treatise ‘Das Kapital’ Karl Marx defined capitalism as combining labour power, machines and raw materials to produce commodities that are sold for profit, which is re-invested in yet more labour power, machines and raw materials.  In other words, capitalism involves processes that produce profit from an economic perspective and, from a thermodynamic perspective, produce entropy, because the second law of thermodynamics demands that all real processes produce entropy.  Thermodynamically, entropy usually takes the form of heat dissipated into the environment, which raises the temperature of the environment; however, it can also be interpreted as an increase in the disorder of a system [see ‘Will it all be over soon?’ on November 2nd, 2016].  The ever-expanding cycle of profit being turned into capital implies that the processes of producing commodities must also become ever larger, and the ever-expanding processes of production imply that the rate of generation of entropy also increases with time.  If no profit were reinvested in economic processes then the processes would still increase the entropy in the universe; but when profit is re-invested and expands the economic processes, the rate of entropy production increases and the entropy in the universe grows exponentially.  That is why the graphs of atmospheric temperature have curved upwards at an increasing rate since the industrial revolution.  As if that were not bad enough, the French economist Thomas Piketty has proposed, in his famous formula ‘r > g’, that the rate of return on capital, ‘r’, is always greater than the rate of growth of the economy, ‘g’.  Hence, even with zero economic growth, the rate of return will be above zero and the level of entropy will rise exponentially.  Piketty identified inequality as a principal effect of his formula and suggested that only cataclysmic events, such as world wars or revolutions, can reduce inequality.  The pessimistic thermodynamicist in me would conclude that an existential cataclysmic event might be the only way that this story ends.
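One way to make the exponential claim explicit, under the simplifying assumptions (mine, not Piketty’s) that returns are fully reinvested and that the rate of entropy production scales with the stock of capital, is:

```latex
% Capital grows at the rate of return r when profits are reinvested
K(t) = K_0\, e^{rt}
% If entropy production is proportional to the scale of economic activity,
\frac{\mathrm{d}S}{\mathrm{d}t} = k\, K(t) = k\, K_0\, e^{rt}
% then the accumulated entropy also grows exponentially with time
S(t) = S_0 + \frac{k K_0}{r}\left(e^{rt} - 1\right)
```

Since Piketty’s inequality r > g holds even when the growth rate g is zero, the exponent r, and with it the exponential rise in entropy, does not disappear in a zero-growth economy.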

Sources:

Raj Patel & Jason W. Moore, A history of the world in seven cheap things, London: Verso, 2018.

Thomas Piketty, A Brief History of Equality, translated by Steven Rendall, Harvard: Belknap, 2022.

Diane Coyle, Piketty the positive, FT Weekend, 16 April/17 April 2022.

Image: Global average near surface temperature since the pre-industrial period from www.eea.europa.eu/data-and-maps/figures/global-average-near-surface-temperature