Tag Archives: neuronal assemblies

Limits of imagination

What’s it like being a bat?  ‘Seeing’ the world through your ears, or at least a sophisticated echo-location system. Or, what’s it like being an octopus, with the eight semi-autonomous arms that I wrote about a couple of weeks ago [see ‘Intelligent aliens?’ on January 16th, 2019]? For most of us, it’s unimaginable. Perhaps that is because we are not bats or octopuses, but that seems to be dodging the issue.  Is it a consequence of our education and how we have been taught to think about science?  Most scientists have been taught to express their knowledge from a third person perspective that omits the personal point of view, i.e. our experience of science.  The philosopher Julian Baggini has questioned the reason for this mode of expression: is it that we haven’t devised a framework for understanding the world scientifically that captures both the first and third person points of view; is it that the mind will always elude scientific explanation; or is it that the mind simply isn’t part of the physical world?

Our brains have as many neurons as there are stars in the galaxy, i.e. about a hundred billion, which is sufficient to create complex processes within us that we are never likely to understand or predict.  In this context, Carlo Rovelli has suggested that the ideas and images that we have of ourselves are much cruder and sketchier than the detailed complexity of what is happening within us.  So, if we struggle to describe our own consciousness, then perhaps it is not surprising that we cannot express what it is like to be a bat or an octopus.  Instead we resort to third person descriptions and justify them as being in the interests of objectivity.  But, does your imagination stretch to how much greater our understanding would be if we did know what it is like to be a bat or an octopus?  And how that might change our attitude to the ecosystem?

BTW:  I would answer yes, yes and maybe to Baggini’s three questions, although I remain open-minded on all of them.

Sources:

Baggini J, The pig that wants to be eaten and 99 other thought experiments, London: Granta Publications, 2008.

Rovelli C, Seven brief lessons on physics, London: Penguin Books, 2016.

Image: https://www.nps.gov/chis/learn/nature/townsends-bats.htm

Entropy on the brain

‘It was the worst of times, it was the worst of times.  Again.  That’s the thing about things.  They fall apart, always have, always will, it’s in their nature.’  These are the opening lines of Ali Smith’s novel ‘Autumn’.  Ali Smith doesn’t mention entropy, but that’s what she is describing.

My first-year lecture course has progressed from the first law of thermodynamics to the second law; and so, I have been stretching the students’ brains by talking about entropy.  It’s a favourite topic of mine but many people find it difficult.  Entropy can be described as the level of disorder present in a system or its environment.  Ludwig Boltzmann derived his famous equation, S = k ln W, which is engraved on his gravestone (he died in 1906).  S is entropy, k is a constant of proportionality named after Boltzmann, and W is the number of ways in which a system can be arranged without changing its energy content (ln means natural logarithm).  So, the more arrangements that are possible, the larger the entropy.
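To make the scaling concrete, here is a short Python sketch of Boltzmann’s equation (a hypothetical illustration of my own, not part of the lecture course; the function name is mine):

```python
import math

# Boltzmann's constant, k, in joules per kelvin (CODATA value)
k = 1.380649e-23

def boltzmann_entropy(W):
    """Entropy S = k ln W for a system with W possible arrangements."""
    return k * math.log(W)

# A single possible arrangement means zero entropy (ln 1 = 0)...
print(boltzmann_entropy(1))  # prints 0.0
# ...and the more arrangements that are possible, the larger the entropy:
print(boltzmann_entropy(10**6) > boltzmann_entropy(10))  # prints True
```

Note that the entropy grows only logarithmically, so a million-fold increase in the number of arrangements adds a fixed increment to S rather than multiplying it.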

By now the neurons in your brain should be firing away nicely with a good level of synchronicity (see my posts entitled ‘Digital hive mind‘ on November 30th, 2016 and ‘Is the world comprehensible?‘ on March 15th, 2017).  In other words, groups of neurons should be showing electrical activity that is in phase with other groups to form large networks.  Some scientists believe that the size of the network is indicative of the level of your consciousness.  However, scientists in Toronto, led by Jose Luis Perez-Velazquez, have suggested that it is not the size of the network that is linked to consciousness but the number of ways that a particular degree of connectivity can be achieved.  This begins to sound like the entropy of your neurons.

In 1948 Claude Shannon, an American electrical engineer, stated that ‘information must be considered as a negative term in the entropy of the system; in short, information is negentropy’. We can extend this idea to the concept that the entropy associated with information becomes lower as it is arranged, or ordered, into knowledge frameworks, e.g. laws and principles, that allow us to explain phenomena or behaviour.
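Shannon quantified information using an entropy of probabilities; a minimal Python sketch (my own hypothetical example, not Shannon’s notation) shows how a more ordered distribution carries lower entropy:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), measured in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally disordered: exactly 1 bit per toss.
print(shannon_entropy([0.5, 0.5]))   # prints 1.0
# A heavily biased coin is more ordered, so its entropy is lower:
print(shannon_entropy([0.9, 0.1]))   # about 0.47 bits
# A certain outcome carries no information at all:
print(shannon_entropy([1.0]))        # prints -0.0
```

In this picture, ordering information into a knowledge framework is like biasing the coin: the outcomes become more predictable and the entropy falls.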

Perhaps these ideas about entropy of information and neurons are connected; because when you have mastered a knowledge framework for a topic, such as the laws of thermodynamics, you need to deploy a small number of neurons to understand new information associated with that topic.  However, when you are presented with unfamiliar situations then you need to fire multiple networks of neurons and try out millions of ways of connecting them, in order to understand the unfamiliar data being supplied by your senses.

For diverse posts on entropy see: ‘Entropy in poetry‘ on June 1st, 2016; ‘Entropy management for bees and flights‘ on November 5th, 2014; and ‘More on white dwarfs and existentialism‘ on November 16th, 2016.

Sources:

Ali Smith, Autumn, Penguin Books, 2017

Consciousness is tied to ‘entropy’, say researchers, Physics World, October 16th, 2016.

Handscombe RD & Patterson EA, The Entropy Vector: Connecting Science and Business, Singapore: World Scientific Publishing, 2004.

Is the world incomprehensible?

For hundreds of years, philosophers and scientists have encouraged one another to keep their explanations of the natural world as simple as possible.  Ockham’s razor, attributed to the 14th century Franciscan friar, William of Ockham, is a well-established and much-cited philosophical principle that, of two possible explanations, the simpler one is more likely to be correct.  More recently, Albert Einstein is supposed to have said: ‘everything should be made as simple as possible, but not simpler’.  I don’t think that William of Ockham and Albert Einstein were arguing that we should keep everything simple; rather, that we should not make scientific explanations more complicated than necessary.  However, do we have a strong preference for focusing on phenomena whose behaviour is uncomplicated enough to be explained by relatively simple theories and models?  In other words, to quote William Wimsatt, ‘we tend to ignore phenomena whose complexity exceeds the capability of our detection apparatus and explanatory models’.

Most of us find science hard; perhaps this is not just about the language used by the cognoscenti to describe it [see my post on ‘Why is thermodynamics so hard?‘ on February 11th, 2015] but more about the complexity of the world around us.  To think about this level of complexity requires us to assemble and synchronize very large collections of neurons (100 million or more) in our brains, which is the very opposite of the repetitive formation of relatively small assemblies of neurons that Susan Greenfield has argued are associated with activities we find pleasurable [see my post entitled ‘Digital hive mind‘ on November 30th, 2016].  This might imply that thinking about complexity is not pleasurable for most of us, or at least requires very significant effort, and that this explains the aesthetic appeal of simplicity.
However, as William Wimsatt has pointed out, ‘simplicity is not reflective of a metaphysical principle of nature’ but a constraint applied by us; and which, if we persist in its application, will render the world incomprehensible to us.

Sources:

William C. Wimsatt, Randomness and perceived randomness in evolutionary biology, Synthese, 43(2):287-329, 1980.

Susan Greenfield, A day in the life of the brain: the neuroscience of consciousness from dawn to dusk, Allen Lane, 2016.

Digital limits analogue future

Feet on Holiday I 1979 Henry Moore OM, CH 1898-1986 Presented by the Henry Moore Foundation 1982 http://www.tate.org.uk/art/work/P02699


Digital everything is trendy at the moment.  I am as guilty as everyone else: my research group is using digital cameras to monitor the displacement and deformation of structural components using a technique called digital image correlation (see my post on ‘256 Shades of grey‘ on January 22nd, 2014).  Some years ago, in a similar vein, I pioneered a technique known as ‘digital photoelasticity’ (see my post on ‘Cow bladders lead to strain measurement‘ on January 7th, 2015).  But, what do we mean by ‘digital’?  Originally it meant related to, resembling or operated by a digit or finger.  However, electronic engineers will refer us to A-to-D and D-to-A converters that transform analogue signals into digital signals and vice versa.  In this sense, digital means ‘expressed in discrete numerical form’ as opposed to analogue, which means something that can vary continuously.  Digital signals are ubiquitous because computers can handle digital information easily.  Computers could be described as very, very large series of switches that can be either on or off, which allows numbers to be represented in binary.  The world’s second largest computer, Tianhe-2, which I visited in Guangzhou a couple of years ago, has about 12.4 petabytes (about 10^16 bytes) of memory, which compares to the 100 billion (10^11) neurons in an average human brain.  There are lots of tasks at which the world’s largest computers are excellent, but none of them can drive a car, ride a bicycle, tutor a group of engineering students and write a blog post on the limits of digital technology all in a few hours.  Ok, we could connect specialized computers together wirelessly under the command of one supercomputer, but that’s incomparable to the 1.4 kilograms of brain cells in an engineering professor’s skull doing all of this without being reprogrammed or requiring significant cooling.
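The difference between analogue and digital signals can be sketched in a few lines of Python (a hypothetical illustration of a crude A-to-D converter; the function and parameter names are my own):

```python
import math

def quantize(signal, bits):
    """Map values in [-1, 1] onto 2**bits discrete levels, like a crude A-to-D converter."""
    levels = 2 ** bits
    step = 2 / (levels - 1)  # spacing between adjacent digital levels
    return [round((x + 1) / step) * step - 1 for x in signal]

# A smoothly varying analogue signal (one cycle of a sine wave)...
analogue = [math.sin(2 * math.pi * t / 50) for t in range(50)]
# ...becomes a staircase of discrete values after 3-bit conversion.
digital = quantize(analogue, bits=3)
print(len(set(digital)))  # no more than 8 distinct levels
```

Adding more bits makes the staircase follow the analogue curve more closely, but the converter can only ever produce a finite set of levels, whereas the analogue signal can take any value in between.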

So, what has our brain got that the world’s latest computer hasn’t?  Well, it appears to be analogue and not digital.  Our consciousness appears to arise from assemblies of millions of neurons firing in synchrony, and because each neuron can fire at an infinite number of levels, our conscious thoughts can take on a multiplicity of forms that a digital computer can never hope to emulate, because its finite number of switches have only two positions each: on and off.

I suspect that the future is not digital but analogue; we just don’t know how to get there, yet.  We need to stop counting with our digits and start thinking with our brains.