Tag Archives: neurons

Psychological entropy increased by ineffectual leaders

You might have wondered why I used ‘entropy’, and ‘psychological entropy’ in particular, as examples in my post on drowning in information a couple of weeks ago [‘We are drowning in information while starving for wisdom‘ on January 20th, 2021].  It was not random.  I spent some of the Christmas break catching up on my reading pile of interesting-looking scientific papers, and one on psychological entropy stimulated my thinking.  Psychological entropy is the concept that our brains are self-organising systems in continual dialogue with the environment, which leads to the emergence of a relatively small number of stable low-entropy states.  These states could be considered to be assemblies of neurons or patterns of thought, perhaps a mindset.  When we are presented with a new situation or problem for which the current assembly or mindset is unsuitable, we start to generate new ideas by forming more and different assemblies of neurons in our brains.  Our responses become unpredictable as the level of entropy in our minds increases, until we identify a new approach that deals effectively with the new situation and add it to our repertoire of stable low-entropy states.  If the external environment is constantly changing, then our brains are likely to be constantly churning through high-entropy states, which leads to anxiety and psychological stress.  Effective leaders can help us cope with changing environments by providing a narrative that our brains can use as a blueprint for developing the appropriate low-entropy state.  Raising psychological entropy by the right amount is conducive to creativity in the arts, science and leadership, but too much leads to mental breakdown.
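The entropy in this idea can be made concrete with Shannon’s formula, H = −Σ p·log₂(p), applied to the probabilities of candidate responses to a situation. The sketch below, using purely illustrative probability values of my own choosing, shows how a familiar situation with one dominant mindset has low entropy, while a novel situation with many equally plausible responses has high entropy:

```python
# Illustrative sketch of the Shannon entropy behind 'psychological
# entropy' (Hirsh et al., 2012): uncertainty over which response
# applies. The probability values below are invented for illustration.
from math import log2

def shannon_entropy(probs):
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A familiar situation: one mindset clearly applies (low entropy).
stable = [0.94, 0.02, 0.02, 0.02]
# A novel situation: four candidate responses compete (high entropy).
novel = [0.25, 0.25, 0.25, 0.25]

print(round(shannon_entropy(stable), 2))  # 0.42 bits: low uncertainty
print(round(shannon_entropy(novel), 2))   # 2.0 bits: maximal for 4 states
```

On this picture, learning a new approach amounts to the distribution collapsing back towards a single dominant response, i.e. returning to a low-entropy state.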


Hirsh JB, Mar RA & Peterson JB, Psychological entropy: a framework for understanding uncertainty-related anxiety, Psychological Review, 119(2):304, 2012.

Handscombe RD & Patterson EA, The Entropy Vector: connecting science and business, Singapore: World Scientific Press, 2004.

Thinking in straight lines is unproductive

I suspect that none of us think in straight lines.  We have random ideas that we progressively arrange into some sort of order, or forget.  The Nobel laureate Herbert Simon thought that three characteristics defined creative thinking: first, the willingness to accept vaguely defined problems and gradually structure them; second, a preoccupation with problems over a considerable period of time; and, third, extensive background knowledge.  The first two characteristics seem strongly connected, because you need to think about an ill-defined problem over a significant period of time in order to gradually give it a structure that will allow you to create possible solutions.  We need to have random thoughts in order to generate new structures and possible solutions that might work better than those we have already tried; so, thinking in straight lines is unlikely to be productive and instead we need intentional mind-wandering [see ‘Ideas from a balanced mind‘ on August 24th, 2016].  More complex problems will require the assembling of more components in the structure and, hence, are likely to require a larger number of neurons to assemble and to take longer, i.e. to require longer and deeper thought with many random excursions [see ‘Slow deep thoughts from planet-sized brain‘ on March 25th, 2020].

In a university curriculum it is relatively easy to deliver extensive background knowledge and perhaps we can demonstrate techniques to students, such as sketching simple diagrams [see ‘Meta-knowledge: knowledge about knowledge‘ on June 19th, 2019], so that they can gradually define vaguely posed problems; however, it is difficult to persuade students to become preoccupied with a problem since many of them are impatient for answers.  I have always found it challenging to teach creative problem-solving to undergraduate students; and, the prospect of continuing limitations on face-to-face teaching has converted this challenge into a problem requiring a creative solution in its own right.


Simon HA, Discovery, invention, and development: human creative thinking, Proc. National Academy of Sciences, USA (Physical Sciences), 80:4569-71, 1983.

Slow deep thoughts from a planet-sized brain

I overheard a clip on the radio last week in which someone was parodying the quote from Marvin, the Paranoid Android in the Hitchhiker’s Guide to the Galaxy: ‘Here I am with a brain the size of a planet and they ask me to pick up a piece of paper. Call that job satisfaction? I don’t.’  It set me thinking about something that I read a few months ago in Max Tegmark’s book, ‘Life 3.0 – being human in the age of artificial intelligence‘ [see ‘Four requirements for consciousness‘ on January 22nd, 2020].  Tegmark speculates that, since consciousness seems to require different parts of a system to communicate with one another and form networks or neuronal assemblies [see ‘Digital hive mind‘ on November 30th, 2016], the thoughts of large systems will necessarily be slower.  Hence, forming a thought in a planet-sized brain will take much longer than in a normal-sized human brain.  However, the more complex assemblies that are achievable with a planet-sized brain might mean that its thoughts and experiences would be much more sophisticated, if few and far between.  Tegmark suggests that a cosmic mind with physical dimensions of a billion light-years would only have time for about ten thoughts before dark energy fragmented it into disconnected parts; however, those thoughts and the associated experiences would be quite deep.
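Tegmark’s estimate can be checked with back-of-envelope arithmetic: if one ‘thought’ requires a signal to cross the whole brain at light speed, then a billion-light-year brain needs about a billion years per thought, while dark energy fragments such structures on roughly the Hubble timescale of around fourteen billion years.  The figures below are order-of-magnitude assumptions of mine, not Tegmark’s own calculation:

```python
# Back-of-envelope check of the 'about ten thoughts' claim.
# Model assumption: one thought = one light-speed signal crossing
# of the brain; fragmentation timescale taken as roughly the
# Hubble time. Both figures are rough, illustrative assumptions.
brain_size_ly = 1e9                   # brain dimension: a billion light-years
time_per_thought_yr = brain_size_ly   # light crossing time, in years
hubble_time_yr = 1.4e10               # approximate timescale of cosmic acceleration

thoughts = hubble_time_yr / time_per_thought_yr
print(f"{thoughts:.0f} thoughts")     # prints "14 thoughts": of order ten
```

That the crude model lands within a factor of two of Tegmark’s figure suggests the claim really is just light-travel time set against the timescale of the accelerating expansion.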


Douglas Adams, The Hitchhiker’s Guide to the Galaxy, Penguin Random House, 2007.

Max Tegmark, Life 3.0 – being human in the age of artificial intelligence, Penguin Books, Random House, UK, 2018.


Four requirements for consciousness

Max Tegmark, in his book ‘Life 3.0 – being human in the age of artificial intelligence‘, has taken a different approach to defining consciousness compared to those that I have discussed previously in this blog, which were based on the synchronous firing of assemblies of neurons [see, for example, ‘Digital hive mind‘ on November 30th, 2016 or ‘Illusion of self‘ on February 1st, 2017] and on consciousness being an accumulation of sensory experiences [see ‘Is there a real ‘you’ or ‘I’?‘ on March 6th, 2019].  In his book, Tegmark discusses systems based on artificial intelligence; however, the four principles or requirements for consciousness that he identifies could be applied to natural systems: (i) Storage – the system needs substantial information-storage capacity; (ii) Processing – the system must have substantial information-processing capacity; (iii) Independence – the system has substantial independence from the rest of the world; and (iv) Integration – the system cannot consist of nearly independent parts.  The last two requirements are relatively easy to apply; however, the definition of ‘substantial’ in the first two is open to interpretation, which leads to discussion of the size of neuronal assembly required for consciousness and whether the 500 million neurons in an octopus might be sufficient [see ‘Intelligent aliens?‘ on January 16th, 2019].


Max Tegmark, Life 3.0 – being human in the age of artificial intelligence, Penguin Books, Random House, UK, 2018.

Image: Ollie the Octopus at the Ocean Lab, (Ceridwen CC BY-SA 2.0)