Tag Archives: power stations

Bringing an end to thermodynamic whoopee

Two weeks ago I used two infographics to illustrate the dominant role of energy use in generating greenhouse gas emissions and the disproportionate production of greenhouse gas emissions by the rich [see ‘Where we are and what we have‘ on November 24th, 2021].  Energy use is responsible for 73% of global greenhouse gas emissions and 16% of the world’s population are responsible for 38% of global CO2 emissions.  Today’s infographics illustrate the energy flows from source to consumption for the USA (above), UK and Europe (thumbnails below).  In the USA fossil fuels (coal, natural gas and petroleum) are the source of nearly 80% of their energy, in the UK it is a little more than 80%, and the chart for Europe is less detailed but the proportion looks similar. COP 26 committed countries to ending ‘support for the international unabated fossil fuel energy sector by the end of 2022’ and recognised that ‘investing in unabated fossil-related energy projects increasingly entails both social and economic risks, especially through the form of stranded assets, and has ensuing negative impacts on government revenue, local employment, taxpayers, utility ratepayers and public health.’  However, to reduce our dependency on fossil fuels we need a strategy, a plan of action for a fundamental change in how we power industry, heat our homes and propel our vehicles.  A hydrogen economy requires the production of hydrogen without using fossil fuels, while electric cars and electric domestic heating require our electricity generating capacity to be at least trebled by 2050 in order to hit the net zero target. The scale and speed of this transition to zero-carbon sources is such that it will have to be achieved using an integrated blend of green energy sources, including solar, wind and nuclear energy.  For example, the UK’s current electricity generating capacity is about 76 GW, and 1 GW is equivalent to 3.1 million photovoltaic (PV) panels or 364 utility-scale wind turbines [www.energy.gov/eere/articles/how-much-power-1-gigawatt], so trebling capacity from one of these sources alone would imply more than 700 million PV panels, or roughly one wind turbine every square mile.  It is easy to write policies but much harder to implement them and make things happen, especially when transformational change is required.  We cannot expect things to happen simply because our leaders have signed agreements and made statements.  Now, national plans are required to wean us from our addiction to fossil fuels – it will be difficult, but the alternative is that global warming might cause the planet to become uninhabitable for us.  It is time to stop ‘making thermodynamic whoopee with fossil fuels’, to quote Kurt Vonnegut [see ‘And then we discovered thermodynamics‘ on February 3rd, 2016].
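As a back-of-envelope check on the scale involved, here is a minimal Python sketch using the figures quoted above (76 GW of current UK capacity and the energy.gov equivalences of 3.1 million PV panels or 364 utility-scale wind turbines per gigawatt); the UK land area of roughly 94,000 square miles is my added assumption, not a figure from the post:

```python
# Back-of-envelope check on trebling UK electricity generating capacity,
# using the equivalences quoted in the post (energy.gov figures).

UK_CAPACITY_GW = 76          # current UK generating capacity (from the post)
PANELS_PER_GW = 3.1e6        # PV panels per gigawatt
TURBINES_PER_GW = 364        # utility-scale wind turbines per gigawatt
UK_AREA_SQ_MILES = 94_000    # approximate UK land area (assumption, not in the post)

trebled_gw = 3 * UK_CAPACITY_GW
panels = trebled_gw * PANELS_PER_GW
turbines = trebled_gw * TURBINES_PER_GW

print(f"Trebled capacity: {trebled_gw} GW")
print(f"PV panels needed: {panels / 1e6:.0f} million")   # ~707 million
print(f"Wind turbines needed: {turbines:,}")             # ~83,000
print(f"Turbines per square mile: {turbines / UK_AREA_SQ_MILES:.2f}")  # ~0.9
```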


Sources:

Kurt Vonnegut, A Man without a Country, New York: Seven Stories Press, 2005.  He wrote ‘we have now all but destroyed this once salubrious planet as a life-support system in fewer than two hundred years, mainly by making thermodynamic whoopee with fossil fuels’.

US Energy flow chart: https://flowcharts.llnl.gov/commodities/energy

EU Energy flow chart: https://ec.europa.eu/eurostat/web/energy/energy-flow-diagrams

UK Energy flow chart: https://www.gov.uk/government/collections/energy-flow-charts#2020

Reduction in usefulness of reductionism

A couple of months ago I wrote about a set of credibility factors for computational models [see ‘Credible predictions for regulatory decision-making‘ on December 9th, 2020] that we designed to inform interactions between researchers, model builders and decision-makers and to establish trust in the predictions from computational models [1].  This is important because computational modelling is becoming ubiquitous in the development of everything from automobiles and power stations to drugs and vaccines, which inevitably leads to its use in supporting regulatory applications.  However, there is another motivation underpinning our work, which is that the systems being modelled are becoming increasingly complex and are likely to exhibit emergent behaviour [see ‘Emergent properties‘ on September 16th, 2015], and this makes it increasingly unlikely that a reductionist approach to establishing model credibility will be successful [2].  The reductionist approach to science, pioneered by Descartes and Newton, has served science well for hundreds of years and is based on the concept that everything about a complex system can be understood by reducing it to its smallest constituent parts.  It is the method of analysis that underpins almost everything you learn as an undergraduate engineer or physicist. However, reductionism loses its power when a system is more than the sum of its parts, i.e., when it exhibits emergent behaviour.  Our approach to establishing model credibility is more holistic than traditional methods.  This seems appropriate when modelling complex systems for which a complete knowledge of the relationships and patterns of behaviour may not be attainable, e.g., when unexpected or unexplainable emergent behaviour occurs [3].  The hegemony of reductionism in science made us nervous about writing about its shortcomings four years ago when we first published our ideas about model credibility [2].  So, I was pleased to see a paper published last year [4] that identified five fundamental properties of biology that weaken the power of reductionism, namely: (1) biological variation is widespread and persistent, (2) biological systems are relentlessly nonlinear, (3) biological systems contain redundancy, (4) biology consists of multiple systems interacting across different time and spatial scales, and (5) biological properties are emergent.  Many engineered systems possess all five of these fundamental properties – you just need to look at them from the appropriate perspective, for example, through a microscope to see the variation in the microstructure of a mass-produced part.  Hence, in the future, there will need to be an increasing emphasis on holistic approaches and systems thinking in both the education and practices of engineers as well as biologists.
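As a toy illustration (mine, not drawn from the references) of how simple nonlinear local rules can generate behaviour that cannot be understood by examining any single part, the Python sketch below runs Wolfram’s Rule 30 cellular automaton: each cell’s next state depends only on itself and its two neighbours, yet the global pattern that emerges is complex and effectively unpredictable:

```python
# Emergence from simple rules: Wolfram's Rule 30 cellular automaton.
# Each cell is updated from just three local values, but the whole row
# develops complex structure that no single cell's rule reveals.

RULE = 30          # the rule number encodes the update table in its binary digits
WIDTH, STEPS = 64, 32

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start from a single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    # look up each cell's next state from its (left, centre, right) neighbourhood
    row = [
        (RULE >> (4 * row[(i - 1) % WIDTH] + 2 * row[i] + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```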

For more on emergence in computational modelling see Manuel DeLanda, Philosophy and Simulation: The Emergence of Synthetic Reason, Continuum, London, 2011; and for more on systems thinking see Fritjof Capra and Pier Luigi Luisi, The Systems View of Life: A Unifying Vision, Cambridge University Press, 2014.

References:

[1] Patterson EA, Whelan MP & Worth A, The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application, Computational Toxicology, 17: 100144, 2021.

[2] Patterson EA & Whelan MP, A framework to establish credibility of computational models in biology, Progress in Biophysics and Molecular Biology, 129: 13-19, 2017.

[3] Patterson EA & Whelan MP, On the validation of variable fidelity multi-physics simulations, J. Sound & Vibration, 448:247-258, 2019.

[4] Pruett WA, Clemmer JS & Hester RL, Physiological Modeling and Simulation—Validation, Credibility, and Application, Annual Review of Biomedical Engineering, 22: 185-206, 2020.

Credible predictions for regulatory decision-making

Regulators are charged with ensuring that manufactured products, from aircraft and nuclear power stations to cosmetics and vaccines, are safe.  The general public seeks certainty that these devices and the materials and chemicals they are made from will not harm them or the environment.  Technologists who design and manufacture these products know that absolute certainty is unattainable and near-certainty is unaffordable.  Hence, they attempt to deliver the service or product that society desires while ensuring that the risks are As Low As Reasonably Practicable (ALARP).  The role of regulators is to independently assess the risks, make a judgment on their acceptability and thus decide whether the operation of a power station or distribution of a vaccine can go ahead.  These are difficult decisions with huge potential consequences – just think of the more than three hundred people killed in the two crashes of Boeing 737 Max airplanes or the 10,000 or so people affected by birth defects caused by the drug thalidomide.  Evidence presented to support applications for regulatory approval is largely based on physical tests, for example fatigue tests on an aircraft structure or toxicological tests using animals.  In some cases the physical tests might not be entirely representative of the real-life situation, which can make it difficult to make decisions using the data; for instance, a ground test on an airplane is not the same as a flight test and in many respects the animals used in toxicity testing are physiologically different to humans.  In addition, physical tests are expensive and time-consuming, which both drives up the cost of seeking regulatory approval and slows down the translation of innovative new products to the market.  The almost ubiquitous use of computer-based simulations to support the research, development and design of manufactured products inevitably leads to their use in supporting regulatory applications.  This creates challenges for regulators who must judge the trustworthiness of predictions from these simulations [see ‘Fake facts & untrustworthy predictions‘ on December 4th, 2019]. It is standard practice for modellers to demonstrate the validity of their models; however, validation does not automatically lead to acceptance of predictions by decision-makers.  Acceptance is more closely related to scientific credibility.  I have been working across a number of disciplines on the scientific credibility of models, including in engineering, where multi-physics phenomena are important, such as hypersonic flight and fusion energy [see ‘Thought leadership in fusion energy‘ on October 9th, 2019], and in computational biology and toxicology [see ‘Hierarchical modelling in engineering and biology‘ on March 14th, 2018]. Working together with my collaborators in these disciplines, we have developed a common set of factors underpinning scientific credibility that are based on principles drawn from the literature on the philosophy of science and are designed to be both discipline-independent and method-agnostic [Patterson & Whelan, 2019; Patterson et al, 2021]. We hope that our cross-disciplinary approach will break down the subject-silos that have become established as different scientific communities have developed their own frameworks for validating models.
As mentioned above, the process of validation tends to be undertaken by model developers and, in some sense, belongs to them; whereas credibility is not exclusive to the developer but is a form of trust that needs to be shared with a decision-maker who seeks to use the predictions to inform a decision [see ‘Credibility is in the eye of the beholder‘ on April 20th, 2016].  Trust requires a common knowledge base and understanding that is usually built through interactions.  We hope the credibility factors will provide a framework for these interactions as well as a structure for building a portfolio of evidence that demonstrates the reliability of a model.
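To make the idea of such a portfolio concrete, here is a purely hypothetical Python sketch; the factor names below are placeholders of my own, not the published factors, which are defined in Patterson & Whelan (2019) and Patterson et al (2021):

```python
# Hypothetical sketch of a portfolio of credibility evidence organised by
# factor. Factor names and entries are illustrative placeholders only.

from dataclasses import dataclass, field

@dataclass
class EvidenceItem:
    factor: str       # credibility factor the evidence addresses
    description: str  # what was done (test, comparison, peer review, ...)
    reference: str    # where a decision-maker can find the detail

@dataclass
class CredibilityPortfolio:
    model_name: str
    items: list[EvidenceItem] = field(default_factory=list)

    def add(self, factor: str, description: str, reference: str) -> None:
        self.items.append(EvidenceItem(factor, description, reference))

    def coverage(self) -> dict[str, int]:
        """Count evidence items per factor, exposing gaps to a decision-maker."""
        counts: dict[str, int] = {}
        for item in self.items:
            counts[item.factor] = counts.get(item.factor, 0) + 1
        return counts

portfolio = CredibilityPortfolio("fatigue model, v2")
portfolio.add("validation", "comparison with full-scale ground test", "report TR-101")
portfolio.add("transparency", "model description and input data published", "archive record")
print(portfolio.coverage())  # e.g. {'validation': 1, 'transparency': 1}
```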

References:

Patterson EA & Whelan MP, On the validation of variable fidelity multi-physics simulations, J. Sound & Vibration, 448:247-258, 2019.

Patterson EA, Whelan MP & Worth A, The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application, Computational Toxicology, 17: 100144, 2021.

Image: Extract from abstract by Zahrah Resh.

Digital twins could put at risk what it means to be human

I have written in the past about my research on the development and use of digital twins.  A digital twin is a functional representation in a virtual world of a real-world entity that is continually updated with data from the real world [see ‘Fourth industrial revolution’ on July 4th, 2018 and also a short video at https://www.youtube.com/watch?v=iVS-AuSjpOQ].  I am working with others on developing an integrated digital nuclear environment from which digital twins of individual power stations could be spawned in parallel with the manufacture of their physical counterparts [see ‘Enabling or disruptive technology for nuclear engineering’ on January 1st, 2015 and ‘Digitally-enabled regulatory environment for fusion power-plants’ on March 20th, 2019].  A couple of months ago, I wrote about the difficulty of capturing tacit knowledge in digital twins, that is, knowledge that is generally not expressed but is retained in the minds of experts and is often essential to developing and operating complex engineering systems [see ‘Tacit hurdle to digital twins’ on August 26th, 2020].  The concept of tapping into someone’s mind to extract tacit knowledge brings us close to thinking about human digital twins, which so far have been restricted to computational models of various parts of human anatomy and physiology.  The idea of a digital twin of someone’s mind raises a myriad of philosophical and ethical issues.  Whilst the purpose of a digital twin of the mind of an operator of a complex system might be to better predict and understand human-machine interactions, the opportunity to use the digital twin to advance techniques of personalisation will likely be too tempting to ignore.  Personalisation is the tailoring of the digital world to respond to our personal needs, for instance using predictive algorithms to recommend what book you should read next or to suggest purchases to you.  At the moment, personalisation is driven by data derived from the tracks you make in the digital world as you surf the internet, watch videos and make purchases.  However, in the future, those predictive algorithms could be based on reading your mind, or at least its digital twin.  We worry about loss of privacy at the moment, by which we probably mean the collation of vast amounts of data about our lives by unaccountable organisations, and it worries us because of the potential for manipulation of our lives without us being aware it is happening.  Our free will is endangered by such manipulation, but it might be lost entirely to a digital twin of our mind.  To quote the philosopher Michael Lynch, you would be handing over ‘privileged access to your mental states’ and to some extent you would no longer be a unique being.  We are a long way from possessing the technology to realise a digital twin of the human mind, but the possibility is on the horizon.
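To make the defining behaviour concrete, here is a minimal, purely illustrative Python sketch (mine, not the integrated digital nuclear environment described above) of a digital twin whose state is continually refreshed with measurements streamed from its physical counterpart; all names and thresholds below are invented for illustration:

```python
# Minimal sketch of a digital twin: a virtual representation that is
# continually updated with data from its real-world counterpart.

import random
import time

class DigitalTwin:
    """Virtual counterpart of a physical asset, synchronised from sensor data."""

    def __init__(self, asset_id: str):
        self.asset_id = asset_id
        self.state: dict[str, float] = {}

    def ingest(self, measurements: dict[str, float]) -> None:
        """Update the twin's state with the latest real-world measurements."""
        self.state.update(measurements)

    def overheat_risk(self) -> bool:
        """Example use of the twin: a simple (hypothetical) diagnostic."""
        return self.state.get("coolant_temp_C", 0.0) > 330.0

def read_sensors() -> dict[str, float]:
    """Stand-in for telemetry streamed from the physical asset."""
    return {"coolant_temp_C": 300 + 40 * random.random(),
            "pump_speed_rpm": 1450 + 100 * random.random()}

twin = DigitalTwin("station-loop-1")
for _ in range(3):  # in practice this loop runs for the life of the asset
    twin.ingest(read_sensors())
    print(twin.asset_id, twin.state, "overheat risk:", twin.overheat_risk())
    time.sleep(0.1)
```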

Source: Richard Waters, They’re watching you, FT Weekend, 24/25 October 2020.

Image: Extract from abstract by Zahrah Resh.