Category Archives: Engineering

Million to one

‘All models are wrong, but some are useful’ is a quote, usually attributed to George Box, that is often cited in the context of computer models and simulations.  Working out which models are useful can be difficult, and it is essential to get it right when a model is to be used to design an aircraft, support the safety case for a nuclear power station or inform regulatory risk assessment on a new chemical.  One way to identify a useful model is to assess its predictions against measurements made in the real world [see ‘Model validation’ on September 18th, 2012].  Many people have worked on validation metrics that allow predicted and measured signals to be compared; some result in a statement of the probability that the predicted and measured signals belong to the same population.  This works well if the predictions and measurements are, for example, the temperature measured at a single weather station over a period of time; however, these validation metrics cannot handle fields of data, for instance the map of temperature, measured with an infrared camera, in a power station during start-up.  We have been working on resolving this issue and have recently published a paper on ‘A probabilistic metric for the validation of computational models’.  We reduce the dimensionality of a field of data, represented by values in a matrix, to a vector using orthogonal decomposition [see ‘Recognizing strain’ on October 28th, 2015].  The data field could be a map of temperature, the strain field in an aircraft wing or the topology of a landscape – it does not matter.  The decomposition is performed separately and identically on the predicted and measured data fields to create two vectors – one each for the predictions and measurements.  We look at the differences between these two vectors and compare them against the uncertainty in the measurements to arrive at a probability that the predictions belong to the same population as the measurements.
There are subtleties in the process that I have omitted but, essentially, we can take two data fields each composed of millions of values and arrive at a single number describing the usefulness of the model’s predictions.
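The principle can be sketched in a few lines of code.  In the sketch below, the shared orthogonal basis is built with a singular value decomposition rather than the decomposition used in our paper, and the probability is reduced to the fraction of coefficient differences falling within the measurement uncertainty; so this is an illustration of the idea, not the published metric.

```python
import numpy as np

def validation_probability(predicted, measured, meas_uncertainty, k=5):
    """Illustrative field-validation metric (not the published one).

    Decompose the predicted and measured data fields (2-D arrays) on
    the same orthogonal basis, then compare the difference between the
    two coefficient vectors against the measurement uncertainty.
    """
    # Build a shared orthogonal basis from the measured field via SVD
    # (a stand-in for the orthogonal decomposition used in the paper).
    U, _, Vt = np.linalg.svd(measured, full_matrices=False)
    U_k, Vt_k = U[:, :k], Vt[:k, :]

    # Project both fields onto the same basis -> coefficient vectors,
    # one each for the measurements and the predictions.
    coeffs_meas = (U_k.T @ measured @ Vt_k.T).diagonal()
    coeffs_pred = (U_k.T @ predicted @ Vt_k.T).diagonal()

    # Fraction of coefficient differences lying within the uncertainty
    # band -> a simple probability-like score between 0 and 1.
    diff = np.abs(coeffs_pred - coeffs_meas)
    return np.mean(diff <= meas_uncertainty)
```

A perfect prediction scores 1.0; a prediction whose decomposition coefficients stray outside the measurement uncertainty scores lower.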

Our paper was published by the Royal Society, with a press release, in the same week as the proposed Brexit agreement; so I would like to think that it was ignored because of the overwhelming interest in the political storm around Brexit rather than because of its esoteric nature.

Source:

Dvurecenska K, Graham S, Patelli E & Patterson EA, A probabilistic metric for the validation of computational models, Royal Society Open Science, 5:180687, 2018.

Blended learning environments

This is the last in the series of posts on Creating A Learning Environment (CALE).  The series has been based on a workshop given periodically by Pat Campbell [of Campbell-Kibler Associates] and me in the UK and USA, except for the last one on ‘Learning problem-solving skills’ on October 24th, 2018, which was derived from talks I gave to students and staff in Liverpool.  In all of these posts, the focus has been on traditional forms of learning environments; however, almost everything that I have described can be transferred to a virtual learning environment, which is what I have done in the two MOOCs [see ‘Engaging learners on-line’ on May 25th, 2016 and ‘Slowing down time to think (about strain energy)’ on March 8th, 2017].

You can illustrate a much wider range of Everyday Engineering Examples on video than is viable in a lecture theatre.  So, for instance, I used my shower to engage the learners, to introduce a little statistical thermodynamics and to explain how we can consider the average behaviour of a myriad of atoms.  However, it is not possible to progress through the 5Es [see ‘Engage, Explore, Explain, Elaborate and Evaluate’ on August 1st, 2018] in a single step of a MOOC; so, instead, I used a step (or sometimes two steps) of the MOOC to address each ‘E’ and cycled around the 5Es about twice per week.  This approach provides an effective structure for the MOOC, which appears to have been a significant factor in achieving higher completion rates than in most MOOCs.

In the MOOC, I extended the Everyday Engineering Example concept into experiments set as homework assignments using kitchen equipment.  For instance, in one lab students were asked to measure the efficiency of their kettle.  In another innovation, we developed Clear Screen Technology to allow me to talk to the audience while solving a worked example.  In the photo below, I am calculating the Gibbs energy in the tank of a compressed-air-powered car in the final week of the MOOC [where we began to transition to more sophisticated examples].
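The kettle lab is essentially a first-law energy balance: compare the heat absorbed by the water with the electrical energy drawn from the wall.  A minimal version of the calculation, with hypothetical figures for the mass of water, temperature rise, kettle power and heating time, might look like this:

```python
# Estimate kettle efficiency from a homework-style measurement:
# useful heat into the water divided by electrical energy consumed.
C_WATER = 4186.0  # specific heat capacity of water, J/(kg K)

def kettle_efficiency(mass_kg, delta_T, power_W, time_s):
    """Efficiency = heat gained by the water / electrical energy in."""
    heat_into_water = mass_kg * C_WATER * delta_T   # J
    electrical_energy = power_W * time_s            # J
    return heat_into_water / electrical_energy

# Hypothetical measurements: 1.0 kg of water heated through 80 K
# by a 2.2 kW kettle in 180 s.
eta = kettle_efficiency(1.0, 80.0, 2200.0, 180.0)
print(f"Efficiency = {eta:.0%}")
```

Students can time their own kettle with a measured quantity of water and a thermometer; the shortfall from 100% prompts a useful discussion of where the rest of the energy goes.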

Last academic year, I blended the MOOC on thermodynamics with my traditional first-year module by removing half the lectures, the laboratory classes and the worked-example classes from the module.  They were replaced by the video shorts, homework labs and Clear Screen Technology worked examples, respectively, from the MOOC.  The results were positive, with increased attendance at lectures and improved performance in the examination, although some students did not like, and did not engage with, the on-line material.

Photographs are stills from the MOOC ‘Energy: Thermodynamics in Everyday Life’.

CALE #10 [Creating A Learning Environment: a series of posts based on a workshop given periodically by Pat Campbell and Eann Patterson in the USA supported by NSF and the UK supported by HEA] – although this post is based on recent experience in developing and delivering a MOOC integrated with traditional learning environments.

Learning problem-solving skills

Inukshuk: meaning ‘in the likeness of a human’ in the Inuit language. A traditional symbol meaning ‘someone was here’ or ‘you are on the right path’.

One definition of engineering given in the Oxford English Dictionary is ‘the action of working artfully to bring something about’.  This action usually requires creative problem-solving, which is a common skill possessed by all engineers regardless of their field of specialisation.  In many universities, students acquire this skill through solving example problems set by their instructors and supported by example classes and/or tutorials.

In my lectures, I solve example problems in class using a pen and paper combined with a visualiser and then give the students a set of problems to solve themselves.  The answers, but not the solutions, are provided, so that students know when they have arrived at the correct answer but not how to get there.  Students find this difficult and complain because I am putting the emphasis on their learning of problem-solving skills, which requires considerable effort by them.  There are no short-cuts – it’s a process of deep learning [see ‘Deep long-term learning’ on April 18th, 2018].

Research shows that students tend to jump into algebraic manipulation of equations, whereas experts experiment to find the best approach to solving a problem.  The transition from student to skilled problem-solver requires students to become comfortable with the slow and uncertain process of creating representations of the problem and exploring possible approaches to the solution [Martin & Schwartz, 2014].  And it takes extensive practice to develop these problem-solving skills [Martin & Schwartz, 2009].  For instance, it is challenging to persuade students to sketch a representation of the problem that they are trying to solve [see ‘Meta-representational competence’ on May 13th, 2015].  Working in small groups with a tutor or a peer-mentor is an effective way of supporting students in acquiring these skills.  However, it is important to ensure that the students remain engaged in the problem-solving, so that the tutor acts as a consultant or guide who is not directly involved in solving the problem but can give students confidence that they are on the right path.

[Footnote: a visualiser is the modern equivalent of an OverHead Projector (OHP) which, instead of projecting optically, uses a digital camera and projector.  It probably deserves to be on the Mindset List, since it is one of those differences between a professor’s experience as a student and our students’ experience [see ‘Engineering idiom’ on September 12th, 2018]].

References:

Martin L & Schwartz DL, A pragmatic perspective on visual representation and creative thinking, Visual Studies, 29(1):80-93, 2014.

Martin L & Schwartz DL, Prospective adaptation in the use of external representations, Cognition and Instruction, 27(4):370-400, 2009.

 

CALE #9 [Creating A Learning Environment: a series of posts based on a workshop given periodically by Pat Campbell and Eann Patterson in the USA supported by NSF and the UK supported by HEA] – although this post is based on an introduction to tutorials given to new students and staff at the University of Liverpool in 2015 & 2016.

Photo: ILANAAQ_Whistler by NordicLondon (CC BY-NC 2.0) https://www.flickr.com/photos/25408600@N00/189300958/

Aircraft inspection

A few months ago, I took this series of photographs while waiting to board a trans-Atlantic flight home.  First, a small ladder was placed in front of the engine.  Then a technician arrived, climbed onto the ladder and spread a blanket on the cowling before kneeling on it and spinning the fan blades slowly.  He must have spotted something that concerned him, because he climbed in, lay on the blanket and made a closer inspection.  Then he climbed down, rolled up the blanket and left.  A few minutes later he returned with a colleague; they laid out the blanket and both had a careful look inside the engine, after which they climbed down, rolled up the blanket, put it back in a special bag and left.  Five or ten minutes later, they were back with a third colleague.  The blanket was laid out again, the engine was inspected by two of them at once and a three-way discussion ensued.  The result was that our flight was postponed while the airline produced a new plane for us.

Throughout this process it appeared that the most sophisticated inspection equipment used was the human eye and a mobile phone.  I suspect that the earlier inspections were reported by phone to the supervisor who came to look for himself before making the decision.  One of the goals of our current research is to develop easy-to-use instrumentation that could be used to provide more information about the structural integrity of components in this type of situation.  In the INSTRUCTIVE project we are investigating the use of low-cost infra-red cameras to identify incipient damage in aerospace structures.  Our vision is that the sort of inspection described above could be performed using an infra-red camera that would provide detailed data about the condition of the structure.  This data would update a digital twin that, in turn, would provide a prognosis for the structure.  The motivation is to improve safety and reduce operating costs by accurate identification of critical damage.