Million to one

‘All models are wrong, but some are useful’ is a quote, usually attributed to George Box, that is often cited in the context of computer models and simulations.  Working out which models are useful can be difficult, and it is essential to get it right when a model is to be used to design an aircraft, support the safety case for a nuclear power station or inform regulatory risk assessment on a new chemical.  One way to identify a useful model is to assess its predictions against measurements made in the real world [see ‘Model validation’ on September 18th, 2012].  Many people have worked on validation metrics that allow predicted and measured signals to be compared; and some result in a statement of the probability that the predicted and measured signals belong to the same population.  This works well if the predictions and measurements are, for example, the temperature measured at a single weather station over a period of time; however, these validation metrics cannot handle fields of data, for instance the map of temperature, measured with an infrared camera, in a power station during start-up.

We have been working on resolving this issue and we have recently published a paper on ‘A probabilistic metric for the validation of computational models’.  We reduce the dimensionality of a field of data, represented by values in a matrix, to a vector using orthogonal decomposition [see ‘Recognizing strain’ on October 28th, 2015].  The data field could be a map of temperature, the strain field in an aircraft wing or the topology of a landscape – it does not matter.  The decomposition is performed separately and identically on the predicted and measured data fields to create two vectors – one each for the predictions and the measurements.  We look at the differences between these two vectors and compare them against the uncertainty in the measurements to arrive at a probability that the predictions belong to the same population as the measurements.  There are subtleties in the process that I have omitted but, essentially, we can take two data fields composed of millions of values and arrive at a single number to describe the usefulness of the model’s predictions.
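To give a flavour of the calculation, here is a minimal sketch in Python.  The function name, the SVD-derived orthogonal basis and the treatment of the measurement uncertainty as a single scalar threshold are all illustrative assumptions; the decomposition and the probability calculation in our paper are more sophisticated.

```python
import numpy as np

def validation_probability(predicted, measured, u_meas, n_modes=10):
    """Sketch of a field-validation metric: reduce both data fields to
    vectors using one orthogonal basis, then report the fraction of the
    coefficient differences lying within the measurement uncertainty."""
    # Build an orthogonal basis from the measured field (here via SVD)
    # and apply it identically to both fields, as described in the post.
    _, _, vt = np.linalg.svd(measured, full_matrices=False)
    basis = vt[:n_modes].T                # orthonormal modes as columns
    v_pred = (predicted @ basis).ravel()  # predicted field -> vector
    v_meas = (measured @ basis).ravel()   # measured field -> vector
    # Treat the fraction of coefficient differences falling within the
    # measurement uncertainty as the probability that the predictions
    # belong to the same population as the measurements.
    diff = np.abs(v_pred - v_meas)
    return float(np.mean(diff <= u_meas))
```

A prediction identical to the measurements returns a probability of one, while a prediction that differs from the measurements by more than the uncertainty everywhere returns zero.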

Our paper was published by the Royal Society, with a press release, in the same week as the proposed Brexit agreement; so I would like to think that it was ignored because of the overwhelming interest in the political storm around Brexit rather than because of its esoteric nature.

Source:

Dvurecenska K, Graham S, Patelli E & Patterson EA, A probabilistic metric for the validation of computational models, Royal Society Open Science, 5:180687, 2018.

Blended learning environments

This is the last in the series of posts on Creating A Learning Environment (CALE).  The series has been based on a workshop given periodically by Pat Campbell [of Campbell-Kibler Associates] and me in the UK and USA, except for the last one, ‘Learning problem-solving skills’ on October 24th, 2018, which was derived from talks I gave to students and staff in Liverpool.  In all of these posts, the focus has been on traditional forms of learning environments; however, almost everything that I have described can be transferred to a virtual learning environment, which is what I have done in the two MOOCs [see ‘Engaging learners on-line’ on May 25th, 2016 and ‘Slowing down time to think (about strain energy)’ on March 8th, 2017].

You can illustrate a much wider range of Everyday Engineering Examples on video than is viable in a lecture theatre.  So, for instance, I used my shower to engage the learners, to introduce a little statistical thermodynamics and to explain how we can consider the average behaviour of a myriad of atoms.  However, it is not possible to progress through the 5Es [see ‘Engage, Explore, Explain, Elaborate and Evaluate’ on August 1st, 2018] in a single step of a MOOC; so, instead, I used a step (or sometimes two steps) of the MOOC to address each ‘E’ and cycled around the 5Es about twice per week.  This approach provides an effective structure for the MOOC, which appears to have been a significant factor in achieving higher completion rates than in most MOOCs.

In the MOOC, I extended the Everyday Engineering Example concept into experiments set as homework assignments using kitchen equipment.  For instance, in one lab students were asked to measure the efficiency of their kettle.  In another innovation, we developed Clear Screen Technology to allow me to talk to the audience while solving a worked example.  In the photo below, I am calculating the Gibbs energy in the tank of a compressed-air-powered car in the final week of the MOOC [where we began to transition to more sophisticated examples].

Last academic year, I blended the MOOC on thermodynamics with my traditional first-year module by removing half the lectures, the laboratory classes and the worked-example classes from the module.  They were replaced by the video shorts, homework labs and Clear Screen Technology worked examples, respectively, from the MOOC.  The results were positive, with increased attendance at lectures and improved performance in the examination, although some students did not like the on-line material and did not engage with it.

Photographs are stills from the MOOC ‘Energy: Thermodynamics in Everyday Life’.

CALE #10 [Creating A Learning Environment: a series of posts based on a workshop given periodically by Pat Campbell and Eann Patterson in the USA supported by NSF and the UK supported by HEA] – although this post is based on recent experience in developing and delivering a MOOC integrated with traditional learning environments.

Knowledge is power

Pitt Rivers Museum, Oxford

“The list of things that I believe is, if not infinite, virtually endless. And I am finite.  Though I can readily imagine what I would have to do to obtain evidence that would support any one of my beliefs, I cannot imagine being able to do this for all of my beliefs.  I believe too much, there is too much relevant evidence (much of it available only after extensive, specialized training); intellect is too small and life is too short.”

These words are a direct quote from the opening paragraph of an article by John Hardwig published in the Journal of Philosophy in 1985. He goes on to argue that we can have good reasons for believing something if we have good reasons for believing that others have good reasons to believe it.  So, it is reasonable for a layperson to believe something that an expert also believes, and it is even rational to refuse to think for ourselves in these circumstances, because life is too short and there are too many other things to think about.

This implies a high level of trust in the expert as well as a concept of knowledge that is known by the community.  Someone somewhere has the evidence to support the knowledge.  For instance, as a professor, I am trusted by my students to provide them with knowledge for which I have the supporting evidence or I believe someone else has the evidence.  This trust is reinforced to a very small extent by replicating the evidence in practical classes.

More than 30 years ago, John Hardwig concluded his article by worrying about the extent to which wisdom is based on trust, and about the threat posed to “individual autonomy and responsibility, equality and democracy” by our dependence on others for knowledge.  Today, the internet has given us access to, if not infinite, virtually endless information.  Unfortunately, much of the information available is inaccurate, incomplete and biased, sometimes due to self-interest.  Our problem is sifting the facts from the fabrications, and identifying who the experts are and who can be trusted as sources of knowledge.  This appears to be leading to a crisis of trust both in experts and in what constitutes the body of knowledge known by the community, which is threatening our democracies and undermining equality.

Source:

Hardwig J, Epistemic dependence, J. Philosophy, 82(7):335-349, 1985.

Wading in reflections

I have written before about Daniel Goleman’s analysis of leadership styles [see ‘Clueless on leadership style‘ on June 14th, 2017]; to implement these styles, he identifies four competencies that you require: self-awareness, self-management, social awareness and relationship management.  Once again, I am involved in helping people develop these competencies through our Science & Technology Leadership CPD programme for aspiring leaders in Research & Development [R&D].  As part of the module on Science Leadership and Ethics, we have asked our delegates to write a short essay reflecting on the ethics of one or two real events and, either from experience or vicariously, on the leadership associated with them.  Our delegates find this challenging, especially the reflective aspect, which is designed to induce them to think about their self, their feelings and their reactions to events.  They are technologists who are used to writing objectively in technical reports, and the concept of writing about the inner workings of their mind is alien to them.

Apparently, the author Peter Carey compared writing to ‘wading in the flooded basement of my mind’ and, to stretch the analogy, I suspect that our delegates are worried about getting out of their depth, or perhaps they haven’t found the stairs to the basement yet.  We try to help by providing a map in the form of the flowchart in the thumbnail, together with the references below.  Nevertheless, most undertake this exercise by standing at the top of the stairs with a weak flashlight; few get their feet wet and tell us what they find in the basement.

References:

A short guide to reflective writing, University of Birmingham, Library Services Academic Skills Centre, https://intranet.birmingham.ac.uk/as/libraryservices/library/skills/asc/documents/public/Short-Guide-Reflective-Writing.pdf

http://www.bbc.co.uk/bitesize/intermediate2/english/folio/personal_reflective_essay/revision/1/

Sources:

Image: https://www.pinterest.co.uk/pin/589901251161855637/

Goleman D, Boyatzis R & McKee A, The new leaders: transforming the art of leadership into the science of results, London: Sphere, 2002.

Dickson A, Books do furnish a lie, FT Weekend, 18/19 August 2018.