Tag Archives: nuclear energy

More uncertainty about matter and energy


When I wrote about wave-particle duality and an electron possessing the characteristics of both matter and energy [see my post entitled ‘Electron uncertainty’ on July 27th, 2016], I dodged the issue of what matter and energy actually are. As an engineer, I think of matter as the solids, liquids and gases that are both manufactured and occur in nature. We should probably add plasmas to this list, as they are created in an increasing number of engineering processes, including power generation using nuclear fusion. But maybe plasmas should be classified as energy, since they are clouds of unbound charged particles, often electrons. Matter is constructed from atoms, and atoms from sub-atomic particles, such as electrons, which can behave as particles or as waves of energy. So clearly the boundary between matter and energy is blurred, or fuzzy. And Einstein’s famous equation describes how energy and matter can be equated: energy is equal to mass times the speed of light squared.
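To give a sense of scale for that equivalence, here is the relation with a back-of-envelope example of my own (the one-gram figure is purely illustrative):

```latex
E = mc^2, \qquad
E = (10^{-3}\,\mathrm{kg}) \times (3\times10^{8}\,\mathrm{m\,s^{-1}})^{2}
  = 9\times10^{13}\,\mathrm{J}
```

In other words, one gram of matter is equivalent to about 9×10¹³ joules, which is roughly a day’s output from a one-gigawatt power station.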

Engineers tend to define energy as the capacity to do work, which is fine for manufactured or generated energy, but is inadequate when thinking about the energy of sub-atomic particles, which is probably why Feynman said we don’t really know what energy is. Most of us think about energy as the stuff that comes down an electricity cable or that we get from eating a banana. However, Evelyn Pielou points out in her book, The Energy of Nature, that energy in nature surrounds us all of the time, not just in the atmosphere or in water flowing in rivers and oceans, but locked into the structure of plants and rocks.

Matter and energy are human constructs and nature does not do rigid classifications, so perhaps we should think about a plant as a highly-organised, localised zone of high-density energy [see my post entitled ‘Fields of flowers‘ on July 8th, 2015]. We will always be uncertain about some things, and as our ability to probe the world around us improves, we will find that we are no longer certain about things we thought we understood. For instance, research has shown that buckyballs, which are spherical fullerene molecules containing sixty carbon atoms with a mass of 720 atomic mass units, and so seem to be quite substantial bits of matter, exhibit wave-particle duality in certain conditions [Arndt et al., 1999].
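To see why such experiments are so delicate, it is worth estimating the de Broglie wavelength of a C60 molecule; the arithmetic below is my own back-of-envelope check, using the most probable speed of about 220 m/s reported by Arndt et al.:

```latex
\lambda = \frac{h}{mv}
        = \frac{6.63\times10^{-34}\,\mathrm{J\,s}}
               {(720 \times 1.66\times10^{-27}\,\mathrm{kg}) \times (220\,\mathrm{m\,s^{-1}})}
        \approx 2.5\times10^{-12}\,\mathrm{m}
```

That is a wavelength of about 2.5 picometres, several hundred times smaller than the diameter of the molecule itself.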

We need to learn to accept uncertainty and appreciate the opportunities it presents to us rather than seek unattainable certainty.

Note: an atomic mass unit is also known as a Dalton and is equivalent to 1.66×10⁻²⁷ kg.

Sources:

Pielou EC, The Energy of Nature, Chicago: The University of Chicago Press, 2001.

Arndt M, Nairz O, Voss-Andreae J, Keller C, van der Zouw G & Zeilinger A, Wave-particle duality of C60 molecules, Nature, 401:680-682, 1999.


Credibility is in the eye of the beholder

Last month I described how computational models were used as more than fables in many areas of applied science, including engineering and precision medicine [‘Models as fables’ on March 16th, 2016]. When people need to make decisions with socioeconomic and/or personal costs based on the predictions from these models, then the models need to be credible. Credibility is like beauty: it is in the eye of the beholder. It is a challenging problem to convince decision-makers, who are often not expert in the technology or the modelling techniques, that the predictions are reliable and accurate. After all, a model that is reliable and accurate, but in which decision-makers have no confidence, is almost useless.

In my research we are interested in the credibility of computational mechanics models that are used to optimise the design of load-bearing structures, whether it is the frame of a building, the wing of an aircraft or a hip prosthesis. We have techniques that allow us to characterise maps of strain using feature vectors [see my post entitled ‘Recognising strain‘ on October 28th, 2015] and then to compare the ‘distances’ between the vectors representing the predictions and the measurements. If the predicted map of strain is a perfect representation of the map measured in a physical prototype, then this ‘distance’ will be zero. Of course, this never happens, because there is noise in the measured data and our models are never perfect: they contain simplifying assumptions that make the modelling viable. The difficult question is how much difference is acceptable between the predictions and the measurements. The public expect certainty with respect to the performance of an engineering structure, whereas engineers know that there is always some uncertainty. We can reduce it, but that costs money: money for more sophisticated models, for more computational resources to execute them, and for more and better-quality measurements.
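The published protocol decomposes strain maps into orthogonal image moments; the sketch below is only a minimal illustration of the underlying idea, using a two-dimensional cosine transform as a stand-in decomposition and entirely hypothetical data:

```python
import numpy as np
from scipy.fft import dctn  # 2D discrete cosine transform

def feature_vector(strain_map, order=5):
    """Reduce a 2D strain map to a short feature vector of
    low-order orthogonal (cosine) coefficients."""
    coeffs = dctn(strain_map, norm='ortho')
    return coeffs[:order, :order].ravel()

# Hypothetical data: a smooth 'predicted' strain field on a 64x64 grid,
# and a 'measured' field that is the same map plus measurement noise.
rng = np.random.default_rng(1)
predicted = np.outer(np.linspace(0.0, 1.0, 64), np.linspace(0.0, 1.0, 64))
measured = predicted + rng.normal(scale=0.01, size=predicted.shape)

# The 'distance' between the two feature vectors is zero only for a
# perfect match, so a non-zero threshold of acceptability must be agreed.
distance = np.linalg.norm(feature_vector(predicted) - feature_vector(measured))
print(f"distance between prediction and measurement: {distance:.4f}")
```

In practice the acceptable distance has to be negotiated with the decision-makers, informed by the noise level in the measured data.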

Models as fables

In his book, ‘Economics Rules – Why economics works, when it fails and how to tell the difference‘, Dani Rodrik describes models as fables: short stories that revolve around a few principal characters, who live in an unnamed generic place, and whose behaviour and interaction produce an outcome that serves as a lesson of sorts. This seems to me to be a healthy perspective compared to the almost slavish belief in computational models that is common today in many quarters. However, in engineering, and increasingly in precision medicine, we use computational models as reliable and detailed predictors of the performance of specific systems. Quantifying this reliability in a way that is useful to non-expert decision-makers is a current area of my research. This work originated in aerospace engineering, where it is possible, though expensive, to acquire comprehensive and information-rich data from experiments and then to validate models by comparing their predictions to the measurements. We have progressed to nuclear power engineering, in which the extreme conditions and time-scales lead to sparse or incomplete data that make it more challenging to assess the reliability of computational models. Now, we are just starting to consider models in computational biology, where the inherent variability of biological data and our inability to control the real world present even bigger challenges to establishing model reliability.

Sources:

Dani Rodrik, Economics Rules: Why economics works, when it fails and how to tell the difference, Oxford University Press, 2015.

Patterson, E.A., Taylor, R.J. & Bankhead, M., A framework for an integrated nuclear digital environment, Progress in Nuclear Energy, 87:97-103, 2016.

Hack, E., Lampeas, G. & Patterson, E.A., An evaluation of a protocol for the validation of computational solid mechanics models, J. Strain Analysis, 51(1):5-13, 2016.

Patterson, E.A., Challenges in experimental strain analysis: interfaces and temperature extremes, J. Strain Analysis, 50(5):282-3, 2015.

Patterson, E.A., On the credibility of engineering models and meta-models, J. Strain Analysis, 50(4):218-220, 2015.

Small is beautiful and economic

Farm tractors have been growing bigger and bigger, though perhaps not everywhere – the photograph was taken in Donegal, Ireland earlier this year. The size of tractors is driven by the economics of needing a driver in the cab. Labour costs are high in many places, so the productivity per tractor driver has to be high too. Hence, the tractors have to move fast and process a large amount of the field on each pass. This leads to enormous tractors that weigh a lot and exert a large pressure on the soil, which in turn results in between 1 and 3% of the farm land becoming unproductive, because crops won’t grow in the severely compacted soil. But what happens if we eliminate the need for the driver by using autonomous vehicles? Then we can have smaller vehicles working 24/7 that do less damage and are cheaper, which means that a single machine breakdown doesn’t bring work to a halt. We can also contemplate tailoring the farming of each field to the local environmental and soil conditions, instead of a mono-crop, one-size-fits-all approach. These are not my ideas but were espoused by Peter Corke of the Queensland University of Technology at a recent meeting at the Royal Society on ‘Robotics and Autonomous Systems’.

It is a similar argument for modular nuclear power stations. Most of the world is intent on building enormous reactors capable of generating several gigawatts of power (that’s typically a 3 with nine zeros after it) at a cost of around £8 billion (that’s an 8 with nine zeros), or roughly £2.70 per watt. Such a massive amount of power requires a massive infrastructure to deliver it to where it is needed, and a shutdown for maintenance, or a breakdown, potentially cuts power to about a million people. The alternative is small modular reactors that are built, and later dismantled, in a factory, leaving an uncontaminated site, at a lower capital cost, and providing a more flexible power feed into the national grid. Some commentators (see, for example, the Editor’s comment in Professional Engineering, November 2015) believe that a factory could be established and be rolling modular reactors off its production line on the same timescale as it takes to build a gigawatt station.
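As a sanity check on that capital cost per watt (my own arithmetic, taking a three-gigawatt station at the quoted £8 billion):

```latex
\frac{\pounds\,8\times10^{9}}{3\times10^{9}\,\mathrm{W}} \approx \pounds\,2.70\ \text{per watt}
```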

Regular readers will recognise a familiar theme found in ‘Small is beautiful and affordable in nuclear power stations’ on January 14th, 2015, ‘Enabling or disruptive technology for nuclear engineering’ on January 28th, 2015 and ‘Small is beautiful’ on October 10th, 2012; as well as the agricultural theme in ‘Knowledge-economy’ on January 1st, 2014.