Category Archives: Thermodynamics

Entropy on the brain

‘It was the worst of times, it was the worst of times.  Again.  That’s the thing about things.  They fall apart, always have, always will, it’s in their nature.’  These are the opening lines of Ali Smith’s novel ‘Autumn’.  Ali Smith doesn’t mention entropy but that’s what she is describing.

My first-year lecture course has progressed from the first law of thermodynamics to the second law; and so, I have been stretching the students’ brains by talking about entropy.  It’s a favourite topic of mine but many people find it difficult.  Entropy can be described as the level of disorder present in a system or its environment.  Ludwig Boltzmann derived his famous equation, S = k ln W, which can be found on his gravestone – he died in 1906.  S is entropy, k is a constant of proportionality named after Boltzmann, and W is the number of ways in which a system can be arranged without changing its energy content (ln means natural logarithm).  So, the more arrangements that are possible, the larger the entropy.
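To make the relation concrete, here is a minimal Python sketch of Boltzmann’s equation; the function name and the example values of W are purely illustrative:

```python
import math

# Boltzmann's constant in joules per kelvin (exact SI value since 2019)
K_B = 1.380649e-23

def boltzmann_entropy(arrangements: int) -> float:
    """Entropy S = k ln W for W equally likely arrangements (microstates)."""
    return K_B * math.log(arrangements)

# A single possible arrangement means zero entropy (ln 1 = 0),
# and more possible arrangements means larger entropy.
s_few = boltzmann_entropy(10)
s_many = boltzmann_entropy(10**6)
assert boltzmann_entropy(1) == 0.0
assert s_many > s_few
```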

By now the neurons in your brain should be firing away nicely with a good level of synchronicity (see my post entitled ‘Digital hive mind‘ on November 30th, 2016 and ‘Is the world comprehensible?‘ on March 15th, 2017).  In other words, groups of neurons should be showing electrical activity that is in phase with other groups to form large networks.  Some scientists believe that the size of the network is indicative of the level of your consciousness.  However, scientists in Toronto led by Jose Luis Perez-Velazquez have suggested that it is not the size of the network that is linked to consciousness but the number of ways that a particular degree of connectivity can be achieved.  This begins to sound like the entropy of your neurons.

In 1948 Claude Shannon, an American electrical engineer, stated that ‘information must be considered as a negative term in the entropy of the system; in short, information is negentropy‘. We can extend this idea to the concept that the entropy associated with information becomes lower as it is arranged, or ordered, into knowledge frameworks, e.g. laws and principles, that allow us to explain phenomena or behaviour.
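This idea can be illustrated with a short Python sketch of Shannon’s entropy measure (the probability distributions below are invented for illustration): a uniform, ‘disordered’ set of outcomes carries maximum entropy, while a highly predictable, ‘ordered’ one carries much less.

```python
import math

def shannon_entropy(probabilities) -> float:
    """Shannon entropy H = -sum(p * log2 p), measured in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Four equally likely outcomes: maximum disorder, 2 bits of entropy.
uniform = [0.25, 0.25, 0.25, 0.25]

# One outcome dominates: well-ordered, far less entropy.
ordered = [0.97, 0.01, 0.01, 0.01]

assert shannon_entropy(uniform) > shannon_entropy(ordered)
```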

Perhaps these ideas about the entropy of information and of neurons are connected; because when you have mastered a knowledge framework for a topic, such as the laws of thermodynamics, you need to deploy only a small number of neurons to understand new information associated with that topic.  However, when you are presented with unfamiliar situations then you need to fire multiple networks of neurons and try out millions of ways of connecting them, in order to understand the unfamiliar data being supplied by your senses.

For diverse posts on entropy see: ‘Entropy in poetry‘ on June 1st, 2016; ‘Entropy management for bees and flights‘ on November 5th, 2014; and ‘More on white dwarfs and existentialism‘ on November 16th, 2016.

Sources:

Ali Smith, Autumn, Penguin Books, 2017

Consciousness is tied to ‘entropy’, say researchers, Physics World, October 16th, 2016.

Handscombe RD & Patterson EA, The Entropy Vector: Connecting Science and Business, Singapore: World Scientific Publishing, 2004.

How many repeats do we need?

This is a question that both my undergraduate students and a group of taught post-graduates have struggled with this month.  In thermodynamics, my undergraduate students were estimating absolute zero in degrees Celsius using a simple manometer and a digital thermometer (this is an experiment from my MOOC: Energy – Thermodynamics in Everyday Life).  They needed to know how many times to repeat the experiment in order to determine whether their result was significantly different from the theoretical value: -273 degrees Celsius [see my post entitled ‘Arbitrary zero‘ on February 13th, 2013 and ‘Beyond zero‘ the following week]. Meanwhile, the post-graduate students were measuring the strain distribution in a metal plate with a central hole that was loaded in tension. They needed to know how many times to repeat the experiment to obtain meaningful results that would allow a decision to be made about the validity of their computer simulation of the experiment [see my post entitled ‘Getting smarter‘ on June 21st, 2017].

The simple answer is six repeats are needed if you want 98% confidence in the conclusion and you are happy to accept that the margin of error and the standard deviation of your sample are equal.  The latter implies that error bars of the mean plus and minus one standard deviation are also 98% confidence limits, which is often convenient.  Not surprisingly, only a few undergraduate students figured that out and repeated their experiment six times; and the post-graduates pooled their data to give them a large enough sample size.

The justification for this answer lies in an equation that relates the sample size, n, to the margin of error, MOE, the standard deviation of the sample, σ, and the shape of the normal distribution described by the z-score or z-statistic, z*: n ≥ (z*σ/MOE)².  The margin of error, MOE, is the maximum expected difference between the true value of a parameter and the sample estimate of the parameter, which is usually the mean of the sample; while the standard deviation, σ, describes the spread of the data values in the sample about the mean value of the sample, μ.  If we don’t know one of these quantities then we can simplify the equation by assuming that they are equal; and then n ≥ (z*)².

The z-statistic is the number of standard deviations from the mean at which a data value lies, i.e., the distance from the mean in a Normal distribution, as shown in the graphic [for more on the Normal distribution, see my post entitled ‘Uncertainty about Bayesian methods‘ on June 7th, 2017].  We can specify its value so that the interval defined by its positive and negative values contains 98% of the distribution.  The values of z* for 90%, 95%, 98% and 99% are shown in the table in the graphic (approximately 1.64, 1.96, 2.33 and 2.58 respectively) with corresponding values of (z*)², which are equivalent to minimum values of the sample size, n (the number of repeats).
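These z-statistics and minimum sample sizes can be checked with a few lines of Python using the standard library’s NormalDist; this is just a quick sketch of the arithmetic, not part of the original analysis:

```python
import math
from statistics import NormalDist

def min_repeats(confidence: float) -> int:
    """Minimum sample size n >= (z*)^2, assuming MOE equals sigma."""
    # Two-sided interval: z* leaves (1 - confidence)/2 in each tail
    z_star = NormalDist().inv_cdf(0.5 + confidence / 2)
    return math.ceil(z_star ** 2)

# Minimum repeats for 90%, 95%, 98% and 99% confidence: 3, 4, 6 and 7
for c in (0.90, 0.95, 0.98, 0.99):
    print(f"{c:.0%} confidence -> at least {min_repeats(c)} repeats")
```

For the 98% confidence level used in the laboratory class, z* is about 2.33, so (z*)² is about 5.4 and the minimum whole number of repeats is six, as stated above.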

Confidence limits are defined as μ ± z*σ/√n; but when n = (z*)², this simplifies to μ ± σ.  So, with a sample size of six (n = 6 for 98% confidence) we can state with 98% confidence that there is no significant difference between our mean estimate and the theoretical value of absolute zero when that difference is less than the standard deviation of our six estimates.

BTW –  the apparatus for the thermodynamics experiments costs less than £10.  The instruction sheet is available here – it is not quite an Everyday Engineering Example but the experiment is designed to be performed in your kitchen rather than a laboratory.

Georgian interior design and efficient radiators

My lecture last week, to first year students studying thermodynamics, was about energy flows and, in particular, heat transfer.  I mentioned that, despite being called radiators, radiation from a typical central heating radiator represents less than a quarter of its heat output, with the rest arising from convection [see post entitled ‘On the beach‘ on July 24th, 2013 for an explanation of types of heat transfer].  This led one student to ask whether black radiators, with an emissivity of close to one, would be more efficient.  The question arises because the rate of radiative heat transfer is proportional to the difference between the fourth powers of the temperatures of the radiator and its surroundings, and to the emissivity of the radiator’s surface.  This implies that heat will transfer more quickly from a hotter radiator, but also more slowly from a white radiator that has an emissivity of 0.05 compared to 1 for a black surface.
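The comparison can be sketched with the Stefan-Boltzmann law for the net radiative heat flux per unit area; the radiator and room temperatures below are chosen purely for illustration, and the emissivity values follow the figures quoted above:

```python
# Stefan-Boltzmann constant, W m^-2 K^-4
SIGMA = 5.670374419e-8

def radiative_flux(emissivity: float, t_radiator_k: float, t_room_k: float) -> float:
    """Net radiative heat flux (W/m^2) from a radiator surface to its surroundings:
    q = epsilon * sigma * (T_rad^4 - T_room^4)."""
    return emissivity * SIGMA * (t_radiator_k**4 - t_room_k**4)

# A 60 C radiator (333 K) in a 20 C room (293 K)
black = radiative_flux(1.0, 333.0, 293.0)   # black surface, emissivity ~1
white = radiative_flux(0.05, 333.0, 293.0)  # white surface, emissivity ~0.05

# At the same temperatures, the radiative flux scales directly with emissivity,
# so the black surface radiates twenty times faster than the white one.
assert black > white
```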

Thus, a black radiator will radiate heat more quickly than a white one; but does that mean it’s more efficient?  The first law of thermodynamics demands that the net energy input to a radiator is the same as the energy input required to raise the temperature of the space in which it is located.  Hence, the usual thermodynamic definition of efficiency, i.e. what we want divided by what we must supply, does not apply.  Instead, we usually mean the rate at which a radiator warms up a room or the size of the radiator required to heat the room.  In other words, a radiator that warms a room quickly is considered more efficient, and a small radiator that achieves the same as a large one is also considered efficient.  So, on this basis a black radiator will be more efficient.

Recent research by a team at my alma mater has shown that a rough black wall behind the radiator also increases its efficiency, especially when the radiator is located slightly away from the wall.  Perhaps it is time for interior designers to develop a retro-Georgian look with dark walls, possibly with sand mixed into the paint to increase surface roughness.

Sources:

Beck SBM, Grinsted SC, Blakey SG & Worden K, A novel design for panel radiators, Applied Thermal Engineering, 24:1291-1300, 2004.

Shati AKA, Blakey SG & Beck SBM, The effect of surface roughness and emissivity on radiator output, Energy and Buildings, 43:400-406, 2011.

Image details:

Woodwork of a Room from the Colden House, 1767, American Decorative Arts, The Metropolitan Museum of Art.

https://www.metmuseum.org/toah/works-of-art/40.127/

Getting it wrong

Filming for the MOOC Energy: Thermodynamics in Everyday Life

Last week’s post was stimulated by my realisation that I had made a mistake in a lecture [see ‘Ample sufficiency of solar energy?‘ on October 25th, 2017]. During the lecture, something triggered a doubt about a piece of information that I used in talking about the world as a thermodynamic system. It caused me to do some more research on the topic afterwards, which led to the blog post.  The students know this already, because I sent an email to them as the post was published.  It was not an error that impacted on the fundamental understanding of the thermodynamic principles, which is fortunate because we are at a point in the course where students are struggling to understand and apply the principles to problems.  This is a normal process from my perspective but rather challenging and uncomfortable for many students.  They are developing creative problem-solving skills – becoming comfortable with the slow and uncertain process of creating representations and exploring the space of possible solutions [Martin & Schwartz, 2009 & 2014].  This takes extensive practice and most students want a quick fix: usually looking at a worked solution, which might induce the feeling that some thermodynamics has been understood but does nothing for problem-solving skills [see my post on ‘Meta-representational competence‘ on May 13th, 2015].

Engineers don’t like to be wrong [see my post on ‘Engineers are slow, error-prone‘ on April 29th, 2014].  The reliability of our solutions and designs is a critical ingredient in the social trust of engineering [Madhavan, 2016].  So, not getting it wrong is deeply embedded in the psyche of most engineers.  It is difficult to persuade most engineers to appear in front of a camera because we worry, not just about getting it wrong, but about telling the whole truth.  The whole truth is often inconvenient for those that want to sensationalize issues for their own purposes, such as to sell news or gain votes, and this approach is anathema to many engineers.  The truth is also often complicated and nuanced, which can render an engineer’s explanation cognitively less attractive than a simple myth, or, in other words, less interesting.  Unfortunately, people mainly pass on information that will cause an emotional response in the recipient, which is perhaps why engineering blogs are not as widely read as many others! [Lewandowsky et al 2012].


This week’s lecture was about energy flows, and heat transfer in particular; so, the following posts from the archive might be of interest: ‘On the beach‘ on July 24th, 2013, ‘Noise transfer‘ on April 3rd, 2013, and ‘Stimulating students with caffeine‘ on December 17th, 2014.

Sources:

Martin L & Schwartz DL, Prospective adaptation in the use of external representations, Cognition and Instruction, 27(4):370-400, 2009.

Martin L & Schwartz DL, A pragmatic perspective on visual representation and creative thinking, Visual Studies, 29(1):80-93, 2014.

Madhavan G, Think like an engineer, London: Oneworld Publications, 2016.

Lewandowsky S, Ecker UKH, Seifert CM, Schwarz N & Cook J, Misinformation and its correction: continued influence and successful debiasing, Psychological Science in the Public Interest, 13(3):106-131, 2012.