
Negative capability and optimal ambiguity

How is your negative capability? The very term 'negative capability' conveys confusion and ambiguity. It means our ability to accept uncertainty, a lack of knowledge or control. It was coined by John Keats to describe the skill of appreciating something without fully understanding it. It implies suspending judgment about something in order to learn more about it. This is difficult because we have to move out of a low-entropy mindset and consider how it fits in a range of possible mindsets or neuronal assemblies, which raises our psychological entropy and with it our anxiety and mental stress [see 'Psychological entropy increased by ineffectual leaders' on February 10th, 2021]. If we are able to tolerate an optimal level of ambiguity and uncertainty then we might be able to develop an appreciation of a complex system, and even an ability to anticipate its behaviour, without a full knowledge or understanding of it. Our sub-conscious brain has excellent negative capability; for example, most of us can catch a ball without understanding, or even knowing, anything about the mechanics of its flight towards us, and we accept a ride home from a friend with no knowledge of their driving skills and no control over the vehicle. However, if our conscious brain knows that they crashed their car last week then it might override the sub-conscious and cause us to think again and perhaps decline the offer of a ride home. Perhaps this is because our conscious brain tends to have less negative capability and likes to be in control.

Engineers like to talk about their intuition, which is probably synonymous with their negative capability because it is their ability to appreciate and anticipate the behaviour of an engineering system without a full knowledge and understanding of it. This intuition is usually based on experience and perhaps resides in the subconscious mind, because if you ask an engineer to explain a decision or prediction based on their intuition then they will probably struggle to provide a complete and rational explanation. They are comfortable with an optimal level of ambiguity, although of course you might not be so comfortable.

Sources:

Richard Gunderman, 'John Keats' concept of "negative capability" – or sitting in uncertainty – is needed now more than ever', The Conversation, February 21st, 2021.

David Jeffery, 'Letter: Keats was uneasy about the pursuit of perfection', FT Weekend, April 2nd, 2021.

Caputo JD. Truth: philosophy in transit. London: Penguin, 2013.

Psychological entropy increased by ineffectual leaders

You might have wondered why I used 'entropy', and 'psychological entropy' in particular, as examples in my post on drowning in information a couple of weeks ago ['We are drowning in information while starving for wisdom' on January 20th, 2021]. It was not a random choice. I spent some of the Christmas break catching up on my reading pile of interesting-looking scientific papers, and one on psychological entropy stimulated my thinking. Psychological entropy is the concept that our brains are self-organising systems in continual dialogue with the environment, which leads to the emergence of a relatively small number of stable low-entropy states. These states could be considered to be assemblies of neurons or patterns of thought, perhaps a mindset. When we are presented with a new situation or problem to solve for which the current assembly or mindset is unsuitable, we start to generate new ideas by forming more and different assemblies of neurons in our brains. Our responses become unpredictable as the level of entropy in our minds increases, until we identify a new approach that deals effectively with the new situation and we add it to our repertoire of low-entropy stable states. If the external environment is constantly changing then our brains are likely to be constantly churning through high-entropy states, which leads to anxiety and psychological stress. Effective leaders can help us cope with changing environments by providing us with a narrative that our brains can use as a blueprint for developing an appropriate low-entropy state. Raising psychological entropy by the right amount is conducive to creativity in the arts, science and leadership, but too much leads to mental breakdown.
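To make the idea more concrete, here is a minimal sketch in Python; it is entirely my own illustration rather than anything from the sources below. It treats the mind as a probability distribution over candidate neuronal assemblies and computes the Shannon entropy of that distribution: a settled mindset concentrates probability on one assembly (low entropy), while a mind searching for a new approach spreads probability across many (high entropy).

```python
import math

def shannon_entropy(p):
    """Shannon entropy H = -sum(p_i * log2(p_i)), in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# A settled mind: one neuronal assembly dominates (a stable, low-entropy mindset).
settled = [0.94, 0.02, 0.02, 0.01, 0.01]

# A searching mind: many candidate assemblies are almost equally active.
searching = [0.24, 0.22, 0.20, 0.18, 0.16]

print(f"settled mindset:   H = {shannon_entropy(settled):.2f} bits")
print(f"searching mindset: H = {shannon_entropy(searching):.2f} bits")
```

Running it shows the settled distribution has an entropy well under one bit while the searching distribution approaches the maximum of log2(5) ≈ 2.32 bits, mirroring the rise and fall in psychological entropy described above.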

Sources:

Hirsh JB, Mar RA, Peterson JB. Psychological entropy: a framework for understanding uncertainty-related anxiety. Psychological Review. 2012;119(2):304-320.

Handscombe RD & Patterson EA, The Entropy Vector: connecting science and business, Singapore: World Scientific Press, 2004.

Slow deep thoughts from a planet-sized brain

I overheard a clip on the radio last week in which someone was parodying the quote from Marvin, the Paranoid Android in the Hitchhiker’s Guide to the Galaxy: ‘Here I am with a brain the size of a planet and they ask me to pick up a piece of paper. Call that job satisfaction? I don’t.’  It set me thinking about something that I read a few months ago in Max Tegmark’s book: ‘Life 3.0 – being human in the age of artificial intelligence‘ [see ‘Four requirements for consciousness‘ on January 22nd, 2020].  Tegmark speculates that since consciousness seems to require different parts of a system to communicate with one another and form networks or neuronal assemblies [see ‘Digital hive mind‘ on November 30th, 2016], then the thoughts of large systems will be slower by necessity.  Hence, the process of forming thoughts in a planet-sized brain will take much longer than in a normal-sized human brain.  However, the more complex assemblies that are achievable with a planet-sized brain might imply that the thoughts and experiences would be much more sophisticated, if few and far between.  Tegmark suggests that a cosmic mind with physical dimensions of a billion light-years would only have time for about ten thoughts before dark energy fragmented it into disconnected parts; however, these thoughts and associated experiences would be quite deep.
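Tegmark's figure of about ten thoughts can be reproduced with a back-of-envelope calculation. The Python sketch below is my own rough illustration, not Tegmark's calculation: it assumes a single thought requires at least one light-speed signal crossing of the brain, and that roughly ten billion years are available before dark energy fragments the cosmic mind; both numbers are order-of-magnitude assumptions.

```python
# Back-of-envelope: thoughts available to a light-limited cosmic brain.
# Assumption: one thought needs at least one signal crossing at light speed,
# so a brain D light-years across needs at least D years per thought.

brain_size_ly = 1e9      # a billion light-years across (Tegmark's example)
years_available = 1e10   # ~ten billion years before dark energy fragments it (rough assumption)

years_per_thought = brain_size_ly  # light-crossing time in years equals size in light-years
n_thoughts = years_available / years_per_thought

print(f"minimum time per thought: {years_per_thought:.0e} years")
print(f"time for about {n_thoughts:.0f} thoughts")  # ~10, consistent with Tegmark's estimate
```

By the same logic, a 0.15 m human brain with signals travelling at around 100 m/s has a crossing time of about a millisecond, which is why small brains can afford fast and frequent, if less expansive, thoughts.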

Sources:

Douglas Adams, The Hitchhiker’s Guide to the Galaxy, Penguin Random House, 2007.

Max Tegmark, Life 3.0 – being human in the age of artificial intelligence, Penguin Books, Random House, UK, 2018.


Four requirements for consciousness

Max Tegmark, in his book Life 3.0 – being human in the age of artificial intelligence, has taken a different approach to defining consciousness compared to those that I have discussed previously in this blog, which were based on the synchronous firing of assemblies of neurons [see, for example, 'Digital hive mind' on November 30th, 2016 or 'Illusion of self' on February 1st, 2017] and on consciousness being an accumulation of sensory experiences ['Is there a real 'you' or 'I'?' on March 6th, 2019]. In his book, Tegmark discusses systems based on artificial intelligence; however, the four principles or requirements for consciousness that he identifies could be applied to natural systems: (i) Storage – the system needs substantial information-storage capacity; (ii) Processing – the system must have substantial information-processing capacity; (iii) Independence – the system must have substantial independence from the rest of the world; and (iv) Integration – the system cannot consist of nearly independent parts. The last two requirements are relatively easy to apply; however, the definition of 'substantial' in the first two requirements is open to interpretation, which leads to discussion of the size of neuronal assembly required for consciousness and whether the 500 million neurons in an octopus might be sufficient [see 'Intelligent aliens?' on January 16th, 2019].
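Because Tegmark frames these as requirements that a system either does or does not meet, they can be written as a crude checklist. The Python sketch below is my own illustrative construction; the example figures and the thresholds standing in for 'substantial' are arbitrary assumptions. It makes the ambiguity concrete: the verdict on the octopus flips depending on where the thresholds for storage and processing are set.

```python
from dataclasses import dataclass

@dataclass
class System:
    name: str
    storage_bits: float    # (i) information-storage capacity
    processing_ops: float  # (ii) information-processing capacity, operations/s
    independent: bool      # (iii) substantially independent of the rest of the world
    integrated: bool       # (iv) not decomposable into nearly independent parts

def conscious_by_tegmark(s: System, storage_min: float, processing_min: float) -> bool:
    """Apply the four requirements; 'substantial' becomes two explicit thresholds."""
    return (s.storage_bits >= storage_min and s.processing_ops >= processing_min
            and s.independent and s.integrated)

# Order-of-magnitude guesses only: ~86 billion neurons in a human brain,
# ~500 million in an octopus, with invented per-neuron capacities.
human = System("human brain", storage_bits=1e15, processing_ops=1e15,
               independent=True, integrated=True)
octopus = System("octopus", storage_bits=1e13, processing_ops=1e13,
                 independent=True, integrated=True)

for threshold in (1e12, 1e14):  # two interpretations of 'substantial'
    verdicts = {s.name: conscious_by_tegmark(s, threshold, threshold)
                for s in (human, octopus)}
    print(f"threshold {threshold:.0e}: {verdicts}")
```

Requirements (iii) and (iv) come out the same under both interpretations; only the meaning of 'substantial' in (i) and (ii) moves the answer, which is exactly the open question about the octopus.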

Source:

Max Tegmark, Life 3.0 – being human in the age of artificial intelligence, Penguin Books, Random House, UK, 2018.

Image: Ollie the Octopus at the Ocean Lab (Ceridwen, CC BY-SA 2.0)