
Thought leadership in fusion engineering

The harnessing of fusion energy has become something of a holy grail – sought after by many without much apparent progress. It is the energy process that ‘powers’ the stars and, if we could reproduce it on earth in a controlled environment, it would offer almost unlimited energy with very low environmental costs. However, understanding the science is an enormous challenge and the engineering task of designing, building and operating a fusion-fuelled power station is even greater. The engineering difficulties stem from the combination of two factors: the emergent behaviour present in such a complex system and the fact that it has never been done before. Engineering has achieved many firsts, but usually through incremental development; with fusion energy, however, it would appear that the plant will only work when all of the required conditions are present simultaneously. In other words, incremental development is not viable and we need everything ready before flicking the switch. Not surprisingly, engineers are cautious about flicking switches when they are not sure what will happen. Yet the potential benefits of getting it right are huge; so, we would really like to do it. Hence the holy grail status: much sought after and offering infinite abundance.

Last week I joined the search, or at least offered guidance to those searching, by publishing an article in Royal Society Open Science on ‘An integrated digital framework for the design, build and operation of fusion power plants‘. Working with colleagues at the Culham Centre for Fusion Energy, Richard Taylor and I have taken our earlier work on an integrated nuclear digital environment for fission-based nuclear energy [see ‘Enabling or disruptive technology for nuclear engineering?‘ on January 28th, 2015] and combined it with the hierarchical pyramid of testing and simulation used in the aerospace industry [see ‘Hierarchical modelling in engineering and biology‘ on March 14th, 2018] to create a framework that can guide the exploration of large design domains using computational models within a distributed and collaborative community of engineers and scientists. We hope it will shorten development times, reduce design and build costs, and improve credibility, operability, reliability and safety. That is a long list of potential benefits for a relatively simple idea in a relatively short paper (only 12 pages). Follow the link to find out more – it is an open access paper, so it’s free.
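To give a flavour of the idea in code, here is a toy sketch in Python. Everything in it is illustrative: the tier names, the example models and the `credibility_gaps` check are my own constructs for this post, not the schema proposed in the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tier:
    """One level of a hierarchical testing-and-simulation pyramid,
    e.g. material coupons at the bottom up to the whole plant on top."""
    name: str
    simulations: List[str] = field(default_factory=list)  # model identifiers
    experiments: List[str] = field(default_factory=list)  # matching test data

@dataclass
class DigitalFramework:
    """Toy registry linking each tier's models to the measurements used
    to establish their credibility, shared across a community."""
    tiers: List[Tier]

    def credibility_gaps(self) -> List[str]:
        """Tiers holding simulations that no experiment yet supports."""
        return [t.name for t in self.tiers if t.simulations and not t.experiments]

pyramid = DigitalFramework(tiers=[
    Tier("material coupon", ["neutron-damage model"], ["irradiation test"]),
    Tier("component", ["breeder-blanket thermal model"], []),
    Tier("whole plant", ["integrated plant simulation"], []),
])
print(pyramid.credibility_gaps())  # tiers where validation evidence is still needed
```

The point of such a structure is simply that, when the whole plant cannot be tested incrementally, the framework has to make visible which computational models are supported by evidence at which level of the pyramid.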

References

Patterson EA, Taylor RJ & Bankhead M, A framework for an integrated nuclear digital environment, Progress in Nuclear Energy, 87:97-103, 2016.

Patterson EA, Purdie S, Taylor RJ & Waldon C, An integrated digital framework for the design, build and operation of fusion power plants, Royal Society Open Science, 6(10):181847, 2019.

Million to one

‘All models are wrong, but some are useful’ is a quote, usually attributed to George Box, that is often cited in the context of computer models and simulations. Working out which models are useful can be difficult and it is essential to get it right when a model is to be used to design an aircraft, support the safety case for a nuclear power station or inform regulatory risk assessment on a new chemical. One way to identify a useful model is to assess its predictions against measurements made in the real world [see ‘Model validation’ on September 18th, 2012]. Many people have worked on validation metrics that allow predicted and measured signals to be compared; and some result in a statement of the probability that the predicted and measured signals belong to the same population. This works well if the predictions and measurements are, for example, the temperature measured at a single weather station over a period of time; however, these validation metrics cannot handle fields of data, for instance the map of temperature, measured with an infrared camera, in a power station during start-up. We have been working on resolving this issue and we have recently published a paper on ‘A probabilistic metric for the validation of computational models’. We reduce the dimensionality of a field of data, represented by values in a matrix, to a vector using orthogonal decomposition [see ‘Recognizing strain’ on October 28th, 2015]. The data field could be a map of temperature, the strain field in an aircraft wing or the topology of a landscape – it does not matter. The decomposition is performed separately and identically on the predicted and measured data fields to create two vectors – one each for the predictions and measurements. We look at the differences in these two vectors and compare them against the uncertainty in the measurements to arrive at a probability that the predictions belong to the same population as the measurements. There are subtleties in the process that I have omitted but, essentially, we can take two data fields composed of millions of values and arrive at a single number to describe the usefulness of the model’s predictions.
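For the curious, here is a minimal sketch in Python of the pipeline described above. It is not the published implementation: I have swapped the orthogonal polynomial decomposition used in the paper for a discrete cosine transform (which is also an orthogonal decomposition) and used a simple coverage fraction in place of the paper's probability statistic; the mode count, the uncertainty value and the synthetic fields are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn

def validation_probability(predicted, measured, meas_uncertainty, n_modes=50):
    """Sketch of a probabilistic validation metric: decompose the predicted
    and measured fields identically, then estimate how consistent the
    predictions are with the measurements, given their uncertainty."""
    # Orthogonal decomposition of the measured field (cosine basis here,
    # standing in for the paper's polynomial basis).
    coeffs_meas = dctn(measured, norm='ortho').ravel()
    # Keep the same dominant modes for both fields, chosen from the
    # measured data so the reduction is identical for each.
    keep = np.argsort(np.abs(coeffs_meas))[::-1][:n_modes]
    s_meas = coeffs_meas[keep]
    s_pred = dctn(predicted, norm='ortho').ravel()[keep]
    # Compare coefficient differences against the measurement uncertainty:
    # here, the fraction lying within +/- 2 standard uncertainties
    # (a simple proxy, not the paper's exact statistic).
    within = np.abs(s_pred - s_meas) <= 2.0 * meas_uncertainty
    return within.mean()

# Two million-value fields reduced to a single number:
rng = np.random.default_rng(1)
measured = np.sin(np.linspace(0, 3, 1000))[:, None] * np.cos(np.linspace(0, 2, 1000))
predicted = measured + rng.normal(0.0, 0.01, measured.shape)
print(validation_probability(predicted, measured, meas_uncertainty=0.05))
```

The essential step is that both fields pass through exactly the same decomposition, so the comparison happens between two short vectors of coefficients rather than between millions of individual values.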

Our paper was published by the Royal Society with a press release, but in the same week as the proposed Brexit agreement; so I would like to think that it was ignored due to the overwhelming interest in the political storm around Brexit rather than due to its esoteric nature.

Source:

Dvurecenska K, Graham S, Patelli E & Patterson EA, A probabilistic metric for the validation of computational models, Royal Society Open Science, 5(11):180687, 2018.

Third time lucky

At the end of last year my research group had articles published in the Royal Society’s journal Open Science in two successive months [see ‘Press Release!‘ on November 15th, 2017 and ‘Slow moving nanoparticles‘ on December 13th, 2017]. I was excited about both publications because I had only had one article published by the Royal Society before and because the Royal Society issues a press release whenever it publishes a new piece of science. However, neither press release generated any interest from anyone; probably because science does not sell newspapers (or attract viewers) unless it is bad news or potentially life-changing. And our work on residual stress around manufactured holes in aircraft or on the motion of nanoparticles meets neither of these criteria.

Last month, we did it again with an article on ‘An experimental study on the manufacture and characterisation of in-plane fibre-waviness defects in composites‘. Third time lucky, because this time our University press office were interested enough to write a piece for the news page of the University website, entitled ‘Engineers develop new method to recreate fibre waviness defects in lab‘. Fibre waviness is an issue in the manufacture of aircraft structural components from carbon-fibre reinforced composites because kinks or waves in the fibres can cause structural weaknesses. As part of his PhD, supported by Airbus and the UK Engineering and Physical Sciences Research Council (EPSRC), Will Christian developed an innovative technique to generate these defects in our lab so that we can gain a better understanding of them. Read the article or the press release to find out more!

Image shows fracture through a waviness defect in the top ply of a carbon-fibre laminate, observed in a microscope following sectioning after failure.

Reference:

Christian WJR, DiazDelaO FA, Atherton K & Patterson EA, An experimental study on the manufacture and characterisation of in-plane fibre-waviness defects in composites, Royal Society Open Science, 5:180082, 2018.

Slow moving nanoparticles

Random track of a nanoparticle superimposed on its image generated in the microscope using a pin-hole and narrowband filter.

A couple of weeks ago I bragged about research from my group being included in a press release from the Royal Society [see post entitled ‘Press Release!‘ on November 15th, 2017]. I hate to be boring but it’s happened again. Some research that we have been performing with the European Union’s Joint Research Centre in Ispra [see my post entitled ‘Toxic nanoparticles‘ on November 13th, 2013] was published this morning in Royal Society Open Science.

Our experimental measurements of the free motion of small nanoparticles in a fluid have shown that they move more slowly than expected. At low concentrations, unexpectedly large groups of molecules in the form of nanoparticles up to 150-300 nm in diameter behave more like an individual molecule than a particle. Our experiments support predictions from computer simulations by other researchers, which suggest that at low concentrations the motion of small nanoparticles in a fluid might be dominated by van der Waals forces rather than the thermal motion of the surrounding molecules. At the nanoscale there is still much that we do not understand, so these findings have potential implications for predicting nanoparticle transport, for instance in drug delivery [e.g., via the nasal passage to the central nervous system], and for understanding enhanced heat transfer in nanofluids, which is important in designing systems such as cooling for electronics, solar collectors and nuclear reactors.

Our article’s title is ‘Transition from fractional to classical Stokes-Einstein behaviour in simple fluids‘, which does not reveal much unless you are familiar with the behaviour of particles and molecules. So, here’s a quick explanation: Robert Brown gave his name to the motion of particles suspended in a fluid after reporting the random motion, or diffusion, of pollen particles in water in 1828. In 1906, Einstein postulated that the motion of a suspended particle is generated by the thermal motion of the surrounding fluid molecules, while Stokes’ law relates the drag force on the particle to its size and the fluid viscosity. Hence, the Brownian motion of a particle can be described by the combined Stokes-Einstein relationship. However, at the molecular scale, the motion of individual molecules in a fluid is dominated by van der Waals forces, which means that the size of the molecule is unimportant and the diffusion of the molecule is inversely proportional to a fractional power of the fluid viscosity; hence the term fractional Stokes-Einstein behaviour. Nanoparticles that approach the size of large molecules are not visible in an optical microscope and so we have tracked them using a special technique based on imaging their shadow [see my post ‘Seeing the invisible‘ on October 29th, 2014].
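For readers who like numbers, here is a minimal sketch in Python of the two regimes. The classical formula is the standard Stokes-Einstein relationship; the fractional exponent `alpha`, the particle size and the fluid properties are illustrative assumptions, not values taken from our paper.

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_D(radius_m, viscosity_Pa_s, temperature_K=298.15):
    """Classical Stokes-Einstein diffusion coefficient for a sphere:
    D = k_B * T / (6 * pi * eta * r)."""
    return k_B * temperature_K / (6.0 * np.pi * viscosity_Pa_s * radius_m)

# A 150 nm diameter particle (r = 75 nm) in water (eta ~ 0.89 mPa.s):
D_classical = stokes_einstein_D(75e-9, 0.89e-3)
print(f"classical D = {D_classical:.2e} m^2/s")  # roughly 3e-12 m^2/s

# Fractional Stokes-Einstein behaviour: D proportional to eta**(-alpha)
# with alpha < 1 and no dependence on particle size. The exponent here
# is illustrative only.
alpha = 0.7
D_ref, eta_ref = D_classical, 0.89e-3
viscosities = np.array([0.5e-3, 0.89e-3, 2.0e-3])
D_fractional = D_ref * (viscosities / eta_ref) ** (-alpha)
print(D_fractional)  # weaker viscosity dependence than the classical 1/eta
```

The contrast in the last few lines is the heart of the paper's title: in the classical regime diffusion scales with both particle size and 1/viscosity, whereas in the fractional regime size drops out and the viscosity dependence weakens.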

Source:

Coglitore D, Edwardson SP, Macko P, Patterson EA & Whelan MP, Transition from fractional to classical Stokes-Einstein behaviour in simple fluids, Royal Society Open Science, 4:170507, 2017.