
Credible predictions for regulatory decision-making

Regulators are charged with ensuring that manufactured products, from aircraft and nuclear power stations to cosmetics and vaccines, are safe. The general public seeks certainty that these devices, and the materials and chemicals they are made from, will not harm them or the environment. Technologists who design and manufacture these products know that absolute certainty is unattainable and near-certainty is unaffordable. Hence, they attempt to deliver the service or product that society desires while ensuring that the risks are As Low As Reasonably Practicable (ALARP). The role of regulators is to independently assess the risks, make a judgment on their acceptability and thus decide whether the operation of a power station or the distribution of a vaccine can go ahead. These are difficult decisions with huge potential consequences: just think of the more than three hundred people killed in the two crashes of Boeing 737 Max airplanes, or the 10,000 or so people affected by birth defects caused by the drug thalidomide.

Evidence presented to support applications for regulatory approval is largely based on physical tests, for example fatigue tests on an aircraft structure or toxicological tests using animals. In some cases the physical tests might not be entirely representative of the real-life situation, which makes it difficult to base decisions on the data; for instance, a ground test on an airplane is not the same as a flight test, and in many respects the animals used in toxicity testing are physiologically different from humans. In addition, physical tests are expensive and time-consuming, which both drives up the cost of seeking regulatory approval and slows the translation of innovative new products to the market. The almost ubiquitous use of computer-based simulations to support the research, development and design of manufactured products inevitably leads to their use in supporting regulatory applications. This creates challenges for regulators, who must judge the trustworthiness of predictions from these simulations [see ‘Fake facts & untrustworthy predictions’ on December 4th, 2019].

It is standard practice for modellers to demonstrate the validity of their models; however, validation does not automatically lead to acceptance of predictions by decision-makers. Acceptance is more closely related to scientific credibility. I have been working across a number of disciplines on the scientific credibility of models, including in engineering, where multi-physics phenomena such as hypersonic flight and fusion energy are important [see ‘Thought leadership in fusion energy’ on October 9th, 2019], and in computational biology and toxicology [see ‘Hierarchical modelling in engineering and biology’ on March 14th, 2018]. Working with my collaborators in these disciplines, we have developed a common set of factors that underpin scientific credibility; they are based on principles drawn from the literature on the philosophy of science and are designed to be both discipline-independent and method-agnostic [Patterson & Whelan, 2019; Patterson et al, 2021]. We hope that our cross-disciplinary approach will break down the subject silos that have become established as different scientific communities have developed their own frameworks for validating models.
As mentioned above, the process of validation tends to be undertaken by model developers and, in some sense, belongs to them; whereas credibility is not exclusive to the developer but is a form of trust that needs to be shared with a decision-maker who seeks to use the predictions to inform their decision [see ‘Credibility is in the eye of the beholder’ on April 20th, 2016]. Trust requires a common knowledge base and understanding, which is usually built through interactions. We hope the credibility factors will provide a framework for these interactions as well as a structure for building a portfolio of evidence that demonstrates the reliability of a model.
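As a rough illustration of what such a portfolio of evidence might look like when it is shared between a modeller and a decision-maker, here is a minimal sketch in Python. The structure, the factor names and the example entries are hypothetical placeholders chosen purely for illustration; they are not the credibility factors published in Patterson & Whelan (2019) or Patterson et al. (2021).

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceItem:
    description: str   # e.g. "mesh convergence study" (hypothetical example)
    source: str        # report, dataset or publication providing the evidence

@dataclass
class CredibilityPortfolio:
    model_name: str
    evidence: dict = field(default_factory=dict)  # maps a credibility factor to a list of EvidenceItem

    def add(self, factor: str, item: EvidenceItem) -> None:
        """File a piece of evidence under a credibility factor."""
        self.evidence.setdefault(factor, []).append(item)

    def gaps(self, required_factors: list) -> list:
        """Factors for which no evidence has yet been assembled for the decision-maker."""
        return [f for f in required_factors if not self.evidence.get(f)]

# Build a portfolio for a hypothetical model and check it against three
# placeholder factors (not the published set of credibility factors).
portfolio = CredibilityPortfolio("wing-panel fatigue model")
portfolio.add("verification",
              EvidenceItem("mesh convergence study", "internal report (hypothetical)"))
print(portfolio.gaps(["verification", "validation", "uncertainty quantification"]))
# -> ['validation', 'uncertainty quantification']
```

The point of such a structure is simply that the evidence is organised around factors agreed with the decision-maker, so that gaps in the portfolio are visible to both parties before a regulatory submission is made.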

References:

Patterson EA & Whelan MP, On the validation of variable fidelity multi-physics simulations, J. Sound & Vibration, 448:247-258, 2019.

Patterson EA, Whelan MP & Worth A, The role of validation in establishing the scientific credibility of predictive toxicology approaches intended for regulatory application, Computational Toxicology, 17: 100144, 2021.

Image: Extract from abstract by Zahrah Resh.

Fake facts & untrustworthy predictions

I need to confess to writing a misleading post some months ago, entitled ‘In Einstein’s footprints?’ on February 27th 2019, in which I promoted our 4th workshop on the ‘Validation of Computational Mechanics Models’ that we held last month at the Guild Hall of Carpenters [Zunfthaus zur Zimmerleuten] in Zurich. I implied that the speakers would be stepping in Einstein’s footprints when they presented their research at the workshop, because Einstein presented a paper at the same venue in 1910. However, as our host in Zurich revealed in his introductory remarks, the Guild Hall was gutted by fire in 2007, so we were meeting in a fake, or replica, which was so good that most of us had not realised. This was quite appropriate, because a theme of the workshop was enhancing the credibility of computer models that are used to replicate the real world. We discussed the issues surrounding the trustworthiness of models in a wide range of fields, including aerospace engineering, biomechanics, nuclear power and toxicology. Many of the presentations are available on the website of the EU project MOTIVATE, which organised and sponsored the workshop as part of its dissemination programme. While we did not solve any problems, we did broaden people’s understanding of the issues associated with the trustworthiness of predictions, and we identified the need to develop common approaches to support regulatory decisions across a range of industrial sectors; that’s probably the theme for our 5th workshop!

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660 and the Swiss State Secretariat for Education, Research and Innovation under contract number 17.00064.

The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.

Image: https://www.tagesanzeiger.ch/Zunfthaus-Zur-Zimmerleuten-Wiederaufbauprojekt-steht/story/30815219


Hierarchical modelling in engineering and biology

In 1979, Glenn Harris proposed an analytical hierarchy of models for estimating tactical force effectiveness for the US Army. It was represented as a pyramid with four layers: a theatre/campaign simulation at the apex, supported by mission-level simulations, engagement models and, at the base, engineering models of assets/equipment. The idea was adopted by the aerospace industry [see the graphic on the left], which places the complete aircraft at the apex, supported by systems, sub-systems and components beneath it in increasing numbers, with the pyramid divided vertically in half to represent physical tests on one side and simulations on the other. This represents the need to validate predictions from computational models against measurements in the real world [see post on ‘Model validation’ on September 18th, 2012]. These diagrams are schematic representations used by engineers to plan and organise the extensive programmes of modelling and physical testing undertaken during the design of new aircraft [see post on ‘Models as fables’ on March 16th, 2016]. The objective of the MOTIVATE research project is to reduce the quantity and increase the quality of the physical tests so that the pyramid becomes lop-sided, i.e. the triangle representing the experiments and tests becomes a much thinner slice than the one representing the modelling and simulations [see post on ‘Brave New World’ on January 10th, 2018].

At the same time, I am working with colleagues in toxicology on approaches to establishing credibility in predictive models for chemical risk assessment. I have constructed an equivalent pyramid to represent the system hierarchy, which is shown on the right in the graphic. The challenge is the lack of measurement data at the top left of the pyramid, for both moral and legal reasons, which means that there is very limited real-world data available to confirm the predictions from the computational models represented on the right of the pyramid. In other words, my collaborators in the aerospace industry would like to be where my colleagues in toxicology, and computational biology in general, already find themselves, relying far less on physical tests; while the computational biologists would like the wealth of real-world measurement data that the aerospace engineers can draw upon. In both cases, however, a paradigm shift is required from objectivism toward relativism, since, in the absence of comprehensive real-world measurement data, validation or confirmation of predictions becomes a social process involving judgement about where the predictions lie on a continuum of usefulness.
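To make the contrast between the two pyramids concrete, the short Python sketch below represents each layer of a hierarchy by a count of physical tests and of simulations, and reports the share of the evidence at each layer that comes from tests. The layer names and the numbers are illustrative assumptions only, not data from the MOTIVATE project or from any toxicological study.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str            # level of the system hierarchy
    physical_tests: int  # real-world tests available at this level (illustrative)
    simulations: int     # computational models/predictions at this level (illustrative)

    def test_fraction(self) -> float:
        """Fraction of the evidence at this layer that comes from physical tests."""
        total = self.physical_tests + self.simulations
        return self.physical_tests / total if total else 0.0

# Aerospace-style pyramid: test evidence becomes more plentiful towards the base.
aircraft = [
    Layer("complete aircraft", 1, 5),
    Layer("systems", 10, 50),
    Layer("sub-systems", 100, 500),
    Layer("components", 1000, 5000),
]

# Toxicology-style pyramid: almost no whole-organism (human) data at the apex.
organism = [
    Layer("whole organism", 0, 5),
    Layer("organs", 2, 50),
    Layer("tissues and cells", 50, 500),
    Layer("molecular interactions", 500, 5000),
]

for pyramid in (aircraft, organism):
    for layer in pyramid:
        print(f"{layer.name:25s} share of evidence from tests = {layer.test_fraction():.2f}")
    print()
```

In this caricature, the engineering hierarchy has at least some test data at every level, while the biological hierarchy has essentially none at the apex, which is the asymmetry described in the paragraph above.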

Sources:

Harris GL, Computer models, laboratory simulators, and test ranges: meeting the challenge of estimating tactical force effectiveness in the 1980’s, US Army Command and General Staff College, May 1979.

Trevisani DA & Sisti AF, Air Force hierarchy of models: a look inside the great pyramid, Proc. SPIE 4026, Enabling Technology for Simulation Science IV, 23 June 2000.

Patterson EA & Whelan MP, A framework to establish credibility of computational models in biology, Progress in Biophysics and Molecular Biology, 129:13-19, 2017.