Category Archives: energy science

Creating an evolving learning environment

A couple of weeks ago, I wrote about marking examinations and my tendency to focus on the students whom I had failed to teach rather than those who excelled in their knowledge of problem-solving with the laws of thermodynamics [see my post 'Depressed by exams' on January 31st, 2018].  One correspondent suggested that I shouldn't beat myself up because 'to teach is to show, to learn is to acquire'; and that I had not failed to show but that some of my students had failed to acquire.  However, Adams and Felder have stated that the 'educational role of faculty is not to impart knowledge; but to design learning environments that support knowledge acquisition'.  My despondency arises from my apparent inability to create a learning environment that supports and encourages knowledge acquisition for all of my students.  People arrive in my class with a variety of formative experiences and different ways of learning, which makes it difficult to generate a learning environment that is effective for everyone.  It's an ongoing challenge, made harder by the ever-widening cultural gap between students and their professors, which is large enough to have warranted at least one anthropological study (see My Freshman Year by Rebekah Nathan).  So, my focus on the weaker exam scripts has a positive outcome because it causes me to think about evolving the learning environment.

Sources:

Adams RS, Felder RM, Reframing professional development: A systems approach to preparing engineering educators to educate tomorrow’s engineers. J. Engineering Education, 97(3):230-240, 2008.

Nathan R, My freshman year: what a professor learned by becoming a student, Cornell University Press, Ithaca, New York, 2005

Depressed by exams

I am not feeling very creative this week, because I am in the middle of marking examination scripts; so, this post is going to be short.  I have 20 days to grade at least 1100 questions and award a maximum of 28,400 marks – that's a lot of decisions for my neurons to handle without also being asked to find new ways to network and generate original thoughts for this blog [see my post on 'Digital hive mind' on November 30th, 2016].

It is a depressing task discovering how little I have managed to teach students about thermodynamics, or maybe I should say, how little they have learned.  However, I suspect these feelings are a consequence of the asymmetry of my brain, which has many more sites capable of attributing blame and only one for assigning praise [see my post entitled ‘Happenstance, not engineering‘ on November 9th, 2016].  So, I tend to focus on the performance of the students at the lower end of the spectrum rather than the stars who spot the elegant solutions to the exam problems.

Sources:

Ngo L, Kelly M, Coutlee CG, Carter RM, Sinnott-Armstrong W & Huettel SA, Two distinct moral mechanisms for ascribing and denying intentionality, Scientific Reports, 5:17390, 2015.

Bruek H, Human brains are wired to blame rather than to praise, Fortune, December 4th 2015.

Entropy on the brain

'It was the worst of times, it was the worst of times.  Again.  That's the thing about things.  They fall apart, always have, always will, it's in their nature.'  Those are the opening lines of Ali Smith's novel 'Autumn'.  Ali Smith doesn't mention entropy but that's what she is describing.

My first-year lecture course has progressed from the first law of thermodynamics to the second law; and so, I have been stretching the students' brains by talking about entropy.  It's a favourite topic of mine but many people find it difficult.  Entropy can be described as the level of disorder present in a system or its environment.  Ludwig Boltzmann derived his famous equation, S = k ln W, which can be found on his gravestone – he died in 1906.  S is entropy, k is a constant of proportionality named after Boltzmann, and W is the number of ways in which the system can be arranged without changing its energy content (ln means natural logarithm).  So, the more arrangements that are possible, the larger the entropy.
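For anyone who likes to see numbers, here is a minimal sketch of Boltzmann's equation in action; the values of W are simply made up for illustration, but they show how the entropy grows as the number of possible arrangements grows.

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant in joules per kelvin

def boltzmann_entropy(W):
    """Boltzmann's equation: S = k ln W."""
    return k_B * math.log(W)

# Illustrative (made-up) numbers of arrangements, W
for W in (1, 10, 1e6):
    print(f"W = {W:>10}: S = {boltzmann_entropy(W):.3e} J/K")
```

A single possible arrangement (W = 1) gives zero entropy, which is why a perfect crystal at absolute zero is taken to have zero entropy.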

By now the neurons in your brain should be firing away nicely with a good level of synchronicity (see my posts entitled 'Digital hive mind' on November 30th, 2016 and 'Is the world comprehensible?' on March 15th, 2017).  In other words, groups of neurons should be showing electrical activity that is in phase with other groups to form large networks.  Some scientists believe that the size of the network is indicative of the level of your consciousness.  However, scientists in Toronto, led by Jose Luis Perez-Velazquez, have suggested that it is not the size of the network that is linked to consciousness but the number of ways that a particular degree of connectivity can be achieved.  This begins to sound like the entropy of your neurons.
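As a toy illustration of that idea – my own analogy, not the Toronto group's calculation – one could count the number of ways, W, of choosing k active connections from all the possible pairings of N neurons and treat ln W as a stand-in for entropy; the numbers below are entirely hypothetical.

```python
from math import comb, log

# Toy illustration: for N neurons there are N*(N-1)/2 possible pairwise
# connections; the number of ways, W, of realising exactly k active
# connections is a binomial coefficient, and ln W echoes Boltzmann's S = k ln W.
N = 20                      # neurons (illustrative)
pairs = N * (N - 1) // 2    # possible connections

for k in (1, 10, pairs // 2, pairs - 1):
    W = comb(pairs, k)
    print(f"k = {k:>3} active connections: W = {W:.3e}, ln W = {log(W):.1f}")
```

The count, and hence the 'entropy', is largest when about half of the possible connections are active – neither a silent brain nor a fully connected one.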

In 1948 Claude Shannon, an American electrical engineer, stated that ‘information must be considered as a negative term in the entropy of the system; in short, information is negentropy‘. We can extend this idea to the concept that the entropy associated with information becomes lower as it is arranged, or ordered, into knowledge frameworks, e.g. laws and principles, that allow us to explain phenomena or behaviour.
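A minimal sketch of Shannon's measure, using made-up probability distributions, shows how the entropy of information falls as the information becomes more ordered:

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy H = -sum(p * log2 p), in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Made-up distributions over eight possible messages:
disordered = [1/8] * 8             # nothing known - every message equally likely
ordered    = [0.9] + [0.1/7] * 7   # a framework that makes one message far more likely

print(f"unstructured information: H = {shannon_entropy(disordered):.2f} bits")
print(f"structured information:   H = {shannon_entropy(ordered):.2f} bits")
```

Ordering the information into a framework that makes outcomes predictable lowers the entropy, which is the sense in which information is 'negentropy'.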

Perhaps these ideas about entropy of information and neurons are connected; because when you have mastered a knowledge framework for a topic, such as the laws of thermodynamics, you need to deploy a small number of neurons to understand new information associated with that topic.  However, when you are presented with unfamiliar situations then you need to fire multiple networks of neurons and try out millions of ways of connecting them, in order to understand the unfamiliar data being supplied by your senses.

For diverse posts on entropy see: ‘Entropy in poetry‘ on June 1st, 2016; ‘Entropy management for bees and flights‘ on November 5th, 2014; and ‘More on white dwarfs and existentialism‘ on November 16th, 2016.

Sources:

Ali Smith, Autumn, Penguin Books, 2017

Consciousness is tied to ‘entropy’, say researchers, Physics World, October 16th, 2016.

Handscombe RD & Patterson EA, The Entropy Vector: Connecting Science and Business, Singapore: World Scientific Publishing, 2004.

How many repeats do we need?

This is a question that both my undergraduate students and a group of taught post-graduates have struggled with this month.  In thermodynamics, my undergraduate students were estimating absolute zero in degrees Celsius using a simple manometer and a digital thermometer (this is an experiment from my MOOC: Energy – Thermodynamics in Everyday Life).  They needed to know how many times to repeat the experiment in order to determine whether their result was significantly different to the theoretical value: -273 degrees Celsius [see my post entitled ‘Arbitrary zero‘ on February 13th, 2013 and ‘Beyond  zero‘ the following week]. Meanwhile, the post-graduate students were measuring the strain distribution in a metal plate with a central hole that was loaded in tension. They needed to know how many times to repeat the experiment to obtain meaningful results that would allow a decision to be made about the validity of their computer simulation of the experiment [see my post entitled ‘Getting smarter‘ on June 21st, 2017].
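For readers who want a flavour of the analysis, here is a minimal sketch of one common way to estimate absolute zero from such measurements: fit a straight line to pressure against temperature and extrapolate to zero pressure.  The readings below are made up for illustration and are not data from the MOOC experiment.

```python
import numpy as np

# Made-up (illustrative) readings: gas pressure at a few temperatures.
# The ideal gas law implies that pressure falls linearly to zero at absolute
# zero, so a straight-line fit extrapolated to p = 0 estimates absolute zero.
temperature_C = np.array([5.0, 20.0, 40.0, 60.0, 80.0])      # degrees Celsius
pressure_kPa  = np.array([96.2, 101.3, 108.3, 115.2, 122.2])  # kilopascals

slope, intercept = np.polyfit(temperature_C, pressure_kPa, 1)
absolute_zero_estimate = -intercept / slope   # temperature at which p = 0

print(f"Estimated absolute zero: {absolute_zero_estimate:.0f} degrees Celsius")
```

Each repeat of the experiment yields one such estimate, and it is the spread of those estimates that the students need to assess.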

The simple answer is that six repeats are needed if you want 98% confidence in the conclusion and you are happy to accept that the margin of error and the standard deviation of your sample are equal.  The latter implies that error bars of the mean plus and minus one standard deviation are also 98% confidence limits, which is often convenient.  Not surprisingly, only a few undergraduate students figured that out and repeated their experiment six times; and the post-graduates pooled their data to give them a large enough sample size.

The justification for this answer lies in an equation that relates the number in a sample, n, to the margin of error, MOE, the standard deviation of the sample, σ, and the shape of the Normal distribution described by the z-score or z-statistic, z*:

n ≥ (z*σ / MOE)²

The margin of error, MOE, is the maximum expected difference between the true value of a parameter and the sample estimate of the parameter, which is usually the mean of the sample; while the standard deviation, σ, describes the spread of the data values in the sample about the mean value of the sample, μ.  If we don't know one of these quantities then we can simplify the equation by assuming that they are equal, i.e. MOE = σ, and then n ≥ (z*)².

The z-statistic is the number of standard deviations from the mean at which a data value lies in a Normal distribution [for more on the Normal distribution, see my post entitled 'Uncertainty about Bayesian methods' on June 7th, 2017].  We can specify its value so that the interval defined by its positive and negative values contains 98% of the distribution.  The values of z* for 90%, 95%, 98% and 99% confidence are 1.645, 1.960, 2.326 and 2.576 respectively, giving values of (z*)² of about 2.7, 3.8, 5.4 and 6.6, which are equivalent (after rounding up) to minimum sample sizes, n, of 3, 4, 6 and 7 repeats.
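If you prefer to let a computer look up the z-statistics, here is a minimal sketch using scipy's Normal quantile function (an assumption about your toolkit – any statistics table gives the same numbers):

```python
from math import ceil
from scipy.stats import norm

# Minimum number of repeats from n >= (z* * sigma / MOE)^2, assuming MOE = sigma
for confidence in (0.90, 0.95, 0.98, 0.99):
    z_star = norm.ppf(1 - (1 - confidence) / 2)   # two-sided interval
    n_min = ceil(z_star ** 2)
    print(f"{confidence:.0%} confidence: z* = {z_star:.3f}, minimum repeats n = {n_min}")
```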

Confidence limits are defined as μ ± z*σ/√n, but when n = (z*)² this simplifies to μ ± σ.  So, with a sample size of six (n = 6 for 98% confidence) we can state with 98% confidence that there is no significant difference between our mean estimate and the theoretical value of absolute zero when that difference is less than the standard deviation of our six estimates.
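Putting it all together, here is a minimal sketch with six made-up estimates of absolute zero (not real student data) that applies the μ ± σ criterion:

```python
import statistics

# Made-up example: six estimates of absolute zero in degrees Celsius
estimates = [-268.0, -275.5, -271.0, -279.0, -265.5, -274.0]

mean = statistics.mean(estimates)
std_dev = statistics.stdev(estimates)   # sample standard deviation
theoretical = -273.0

difference = abs(mean - theoretical)
print(f"mean = {mean:.1f} C, standard deviation = {std_dev:.1f} C")
if difference < std_dev:
    print("No significant difference from -273 C at 98% confidence (n = 6).")
else:
    print("Mean estimate differs significantly from -273 C at 98% confidence.")
```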

BTW –  the apparatus for the thermodynamics experiments costs less than £10.  The instruction sheet is available here – it is not quite an Everyday Engineering Example but the experiment is designed to be performed in your kitchen rather than a laboratory.