Tag Archives: knowledge

Digital twins that thrive in the real world

Windows of the Soul II [3D video art installation: http://www.haigallery.com/sonia-falcone/]

Digital twins are becoming ubiquitous in many areas of engineering [see 'Can you trust your digital twin?' on November 23rd, 2016].  At the same time, however, the terminology is becoming blurred as digital shadows and digital models are treated as if they are synonymous with digital twins.  A digital model is a digitised replica of a physical entity which lacks any automatic data exchange between the entity and its replica.  A digital shadow is the digital representation of a physical object with a one-way flow of information from the object to its representation.  But a digital twin is a functional representation with a live feedback loop to its counterpart in the real world.  The feedback loop is based on continuous updates to the digital twin about the condition and performance of the physical entity, using data from sensors, and on analysis from the digital twin about the performance of the physical entity.  This enables a digital twin to provide a service to many stakeholders.  For example, the users of a digital twin of an aircraft engine could include the manufacturer, the operator, the maintenance providers and the insurers.  These capabilities imply that digital twins are themselves becoming products, which exist in a digital context that might connect many digital products, thus forming an integrated digital environment.  I wrote about integrated digital environments when they were a concept and the primary challenges were technical in nature [see 'Enabling or disruptive technology for nuclear engineering?' on January 28th, 2015].  Many of these technical challenges have been resolved and the next set of challenges are economic and commercial ones associated with launching digital twins into global markets that lack adequate understanding, legislation, security, regulation or governance for digital products.  In collaboration with my colleagues at the Virtual Engineering Centre, we have recently published a white paper, entitled 'Transforming digital twins into digital products that thrive in the real world', which reviews these issues and identifies the need to establish digital contexts that embrace the social, economic and technical requirements for the appropriate use of digital twins [see 'Digital twins could put at risk what it means to be human' on November 18th, 2020].
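The distinction between the three terms is essentially one of data flow.  As a minimal sketch (not taken from the white paper, and using hypothetical class names and a made-up temperature threshold purely for illustration), the Python below contrasts a digital model with no automatic data exchange, a digital shadow updated one-way from sensors, and a digital twin that also feeds its analysis back to the physical asset.

```python
from dataclasses import dataclass, field


@dataclass
class PhysicalAsset:
    """Stand-in for the real-world entity, e.g. an aircraft engine."""
    sensor_readings: dict = field(default_factory=dict)
    settings: dict = field(default_factory=dict)


@dataclass
class DigitalModel:
    """Digital model: a digitised replica with no automatic data exchange."""
    parameters: dict = field(default_factory=dict)


class DigitalShadow(DigitalModel):
    """Digital shadow: one-way flow of information from object to representation."""
    def update_from(self, asset: PhysicalAsset) -> None:
        self.parameters.update(asset.sensor_readings)


class DigitalTwin(DigitalShadow):
    """Digital twin: live feedback loop between representation and object."""
    def advise(self, asset: PhysicalAsset) -> None:
        self.update_from(asset)                           # data flows in from sensors
        if self.parameters.get("temperature", 0) > 900:   # hypothetical threshold
            asset.settings["throttle"] = "reduce"         # analysis flows back out
```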

Going against the flow

Last week I wrote about research we have been carrying out over the last decade that is being applied to large-scale structures in the aerospace industry (see 'Slowly crossing the valley of death' on January 27th, 2021). I also work on very much smaller 'structures' that are only tens of nanometres in diameter, or about a billion times smaller than the test samples in last week's post (see 'Toxic nanoparticles?' on November 13th, 2013). The connection is the use of light to measure shape, deformation and motion, and then the use of those measurements to validate predictions from theoretical or computational models. About three years ago, we published research which demonstrated that the motion of very small particles (less than about 300 nanometres) at low concentrations (less than about a billion per millilitre) in a fluid was dominated by the molecules of the fluid rather than by interactions between the particles (see Coglitore et al, 2017 and 'Slow moving nanoparticles' on December 13th, 2017). These data confirmed results from earlier molecular dynamics simulations that contradicted predictions using the Stokes-Einstein equation, which was derived by Einstein in his PhD thesis for a 'Stokes' particle undergoing Brownian motion. The Stokes-Einstein equation works well for large particles, but the physics of motion changes when the particles are very small and far apart, so that van der Waals forces and electrostatic forces play a dominant role, as we have shown in a more recent paper (see Giorgi et al, 2019).  This becomes relevant when evaluating nanoparticles as potential drug delivery systems or assessing their toxicological impact.  We have shown recently that instruments based on dynamic scattering of light from nanoparticles are likely to be inaccurate because they are based on fitting measurement data to the Stokes-Einstein equation.  In a paper published last month, we found that asymmetric flow field-flow fractionation (AF4) in combination with dynamic light scattering, when used to measure the size of nanoparticles in suspension, tended to over-estimate the diameter of particles smaller than 60 nanometres at low concentrations by up to a factor of two (see Giorgi et al, 2021).  Someone commented recently that our work in this area was not highly cited, but perhaps this is unsurprising when it undermines a current paradigm.  We have certainly learnt to handle rejection letters, to redouble our efforts to demonstrate the rigour of our research and to present conclusions in a manner that appears to build on existing knowledge rather than demolish it.
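As a point of reference, the Stokes-Einstein equation referred to above gives the translational diffusion coefficient of a spherical particle of diameter d in a fluid of viscosity η at temperature T as D = kT/(3πηd), where k is Boltzmann's constant; instruments based on dynamic light scattering effectively measure D and invert this relation to report a particle size. The short Python sketch below is a minimal illustration of that calculation, with assumed values for water at room temperature (it is not code from any of our papers).

```python
from math import pi

BOLTZMANN = 1.380649e-23  # Boltzmann's constant, J/K


def stokes_einstein_diffusion(diameter_m: float,
                              temperature_K: float = 293.15,
                              viscosity_Pa_s: float = 1.0e-3) -> float:
    """Translational diffusion coefficient D = kT / (3*pi*eta*d) in m^2/s."""
    return BOLTZMANN * temperature_K / (3 * pi * viscosity_Pa_s * diameter_m)


# Example: a 60 nanometre particle in water at about 20 degrees Celsius
print(f"D = {stokes_einstein_diffusion(60e-9):.2e} m^2/s")  # roughly 7e-12 m^2/s
```

It is this inversion from measured diffusion to particle diameter that becomes unreliable when the particles are very small and dilute, because the underlying equation no longer describes their motion.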

Sources:

Coglitore, D., Edwardson, S.P., Macko, P., Patterson, E.A. and Whelan, M., 2017. Transition from fractional to classical Stokes–Einstein behaviour in simple fluids. Royal Society open science, 4(12), p.170507.

Giorgi, F., Coglitore, D., Curran, J.M., Gilliland, D., Macko, P., Whelan, M., Worth, A. and Patterson, E.A., 2019. The influence of inter-particle forces on diffusion at the nanoscale. Scientific reports, 9(1), pp.1-6.

Giorgi, F., Curran, J.M., Gilliland, D., La Spina, R., Whelan, M.P. and Patterson, E.A., 2021. Limitations of nanoparticles size characterization by asymmetric flow field-fractionation coupled with online dynamic light scattering. Chromatographia, doi.org/10.1007/s10337-020-03997-7.

Image is a photograph of a fast-flowing mountain river taken in Yellowstone National Park during a road trip across the USA in 2006.

We are drowning in information while starving for wisdom

Image: Lake Maggiore from Angera.

The title of this post is a quote from Edward O. Wilson's book 'Consilience: The Unity of Knowledge'. For example, if you search for scientific papers about "Entropy" then you will probably find more than 3.5 million of them – an impossible quantity for an individual to read – and even when you narrow the search to "psychological entropy", which is a fairly niche topic, you will still find nearly 500 papers – a challenging reading list for most people.  The analysis of the trends embedded in scientific papers has become a research activity in its own right, see for example Basurto-Flores et al, 2018 on papers about entropy; however, this type of analysis seems to generate yet more information rather than wisdom.  In this context, wisdom is associated with insight based on knowledge and experience; however, the quality of the experiences is important, as is the process of self-reflection (see Nicholas Weststrate's PhD thesis).  There are no prizes for wisdom and we appoint and promote researchers based on their publication records; hence, it is unsurprising that journal editors are swamped by thousands of manuscripts submitted for publication, with more than 2 million papers published every year.  The system is out of control, driven by authors building publication lists longer than their competitors' for jobs, promotion and grant funding, and by publishers seeking larger profits from publishing more and bigger journals.  There are so many manuscripts submitted to journals that the quality of the reviewing and editing is declining, leading to both false positives and false negatives, i.e. papers being published that contain little, if any, original content or lack sufficient evidence to support their conclusions, and highly innovative papers being rejected because they are perceived to be wrong rather than simply deviating from the current paradigm. The drop in quality and rise in quantity of papers published makes keeping up with the scientific literature both expensive and inefficient in terms of time and energy, which slows down the acquisition of knowledge and leaves less time for the reflection and experience that are prerequisites for wisdom. So what incentives are there for a scientist or engineer to aspire to be wise, given the lack of prizes and career rewards for wisdom?  In Chinese thought, wisdom is perceived as expertise in the art of living: the ability to grasp what is happening and to adjust to the imminent future (Simandan, 2018).  All of these attributes seem to be advantageous to a career based on solving problems, but you need the sagacity to realise that the rewards are indirect and often intangible.

References:

Basurto-Flores, R., Guzmán-Vargas, L., Velasco, S., Medina, A. and Hernandez, A.C., 2018. On entropy research analysis: cross-disciplinary knowledge transfer. Scientometrics, 117(1), pp.123-139.

Simandan, D., 2018. Wisdom and foresight in Chinese thought: sensing the immediate future. Journal of Futures Studies, 22(3), pp.35-50.

Weststrate, N.M., 2017. The examined life: relations among life experience, self-reflection and wisdom. PhD Thesis, University of Toronto.

Wilson, E.O., 1998. Consilience: the unity of knowledge. London: Little Brown and Company.

Credible predictions for regulatory decision-making

Regulators are charged with ensuring that manufactured products, from aircraft and nuclear power stations to cosmetics and vaccines, are safe.  The general public seeks certainty that these devices, and the materials and chemicals they are made from, will not harm them or the environment.  Technologists who design and manufacture these products know that absolute certainty is unattainable and near-certainty is unaffordable.  Hence, they attempt to deliver the service or product that society desires while ensuring that the risks are As Low As Reasonably Practicable (ALARP).  The role of regulators is to independently assess the risks, make a judgment on their acceptability and thus decide whether the operation of a power station or the distribution of a vaccine can go ahead.  These are difficult decisions with huge potential consequences – just think of the more than three hundred people killed in the two crashes of Boeing 737 Max airplanes or the 10,000 or so people affected by birth defects caused by the drug thalidomide.  Evidence presented to support applications for regulatory approval is largely based on physical tests, for example fatigue tests on an aircraft structure or toxicological tests using animals.  In some cases, the physical tests might not be entirely representative of the real-life situation, which can make it difficult to make decisions using the data; for instance, a ground test on an airplane is not the same as a flight test, and in many respects the animals used in toxicity testing are physiologically different from humans.  In addition, physical tests are expensive and time-consuming, which both drives up the cost of seeking regulatory approval and slows down the translation of new, innovative products to the market.  The almost ubiquitous use of computer-based simulations to support the research, development and design of manufactured products inevitably leads to their use in supporting regulatory applications.  This creates challenges for regulators, who must judge the trustworthiness of predictions from these simulations [see 'Fake facts & untrustworthy predictions' on December 4th, 2019]. It is standard practice for modellers to demonstrate the validity of their models; however, validation does not automatically lead to acceptance of predictions by decision-makers.  Acceptance is more closely related to scientific credibility.  I have been working across a number of disciplines on the scientific credibility of models, including in engineering, where multi-physics phenomena are important, such as hypersonic flight and fusion energy [see 'Thought leadership in fusion energy' on October 9th, 2019], and in computational biology and toxicology [see 'Hierarchical modelling in engineering and biology' on March 14th, 2018]. Working together with my collaborators in these disciplines, we have developed a common set of factors underpinning scientific credibility, which are based on principles drawn from the literature on the philosophy of science and are designed to be both discipline-independent and method-agnostic [Patterson & Whelan, 2019; Patterson et al, 2021]. We hope that our cross-disciplinary approach will break down the subject silos that have become established as different scientific communities have developed their own frameworks for validating models.
As mentioned above, the process of validation tends to be undertaken by model developers and, in some sense, belongs to them; whereas credibility is not exclusive to the developer but is a trust that needs to be shared with a decision-maker who seeks to use the predictions to inform their decisions [see 'Credibility is in the eye of the beholder' on April 20th, 2016].  Trust requires a common knowledge base and understanding that is usually built through interactions.  We hope the credibility factors will provide a framework for these interactions, as well as a structure for building a portfolio of evidence that demonstrates the reliability of a model.

References:

Patterson EA & Whelan MP, On the validation of variable fidelity multi-physics simulations, J. Sound & Vibration, 448:247-258, 2019.

Patterson EA, Whelan MP & Worth A, The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application, Computational Toxicology, 17: 100144, 2021.

Image: Extract from abstract by Zahrah Resh.