Tag Archives: brain

Do you believe in an afterlife?

‘I believe that energy can’t be destroyed, it can only be changed from one form to another.  There’s more to life than we can conceive of.’ The quote is the singer and songwriter Corinne Bailey Rae’s answer to the question: do you believe in an afterlife? [see Inventory in the FT Magazine, October 26/27 2019].  However, the first part of her answer is the first law of thermodynamics, while the second part resonates with Erwin Schrödinger’s view on life and consciousness [see ‘Digital hive mind‘ on November 30th, 2016]. The garden writer and broadcaster, Monty Don gave a similar answer to the same question: ‘Absolutely.  I believe that the energy lives on and is connected to place.  I do have this idea of re-joining all of my past dogs and family on a summer’s day, like a Stanley Spencer painting.’ [see Inventory in the FT Magazine, January 18/19 2020].  The boundary between energy and mass is blurry because matter is constructed from atoms, and atoms from sub-atomic particles, such as electrons, that can behave as particles or as waves of energy [see ‘More uncertainty about matter and energy‘ on August 3rd, 2016].  Hence, the concept that after death our body reverts to a cloud of energy, as the complex molecules of our anatomy are broken down into elemental particles, is completely consistent with modern physics.  However, I suspect Rae and Don were going further and suggesting that our consciousness lives on in some form, perhaps through some kind of unified mind that Schrödinger thought might exist as a consequence of our individual minds networking together to create emergent behaviour.  Schrödinger found it utterly impossible to form an idea of how this might happen, and it seems unlikely that an individual mind could ever do so; however, perhaps the more percipient amongst us occasionally get a hint of the existence of something beyond our individual consciousness.

Reference: Erwin Schrödinger, What is Life? with Mind and Matter and Autobiographical Sketches, Cambridge University Press, 1992.

Image: ‘Sunflower and dog worship’ by Stanley Spencer, 1937 @ https://www.bbc.co.uk/news/entertainment-arts-13789029

Four requirements for consciousness

Max Tegmark, in his book Life 3.0 – being human in the age of artificial intelligence, has taken a different approach to defining consciousness compared to those that I have discussed previously in this blog, which were based on synchronous firing of assemblies of neurons [see, for example, ‘Digital hive mind‘ on November 30th, 2016 or ‘Illusion of self‘ on February 1st, 2017] and on consciousness being an accumulation of sensory experiences [‘Is there a real ‘you’ or ‘I’?‘ on March 6th, 2019].  In his book, Tegmark discusses systems based on artificial intelligence; however, the four principles or requirements for consciousness that he identifies could be applied to natural systems: (i) Storage – the system needs substantial information-storage capacity; (ii) Processing – the system must have substantial information-processing capacity; (iii) Independence – the system has substantial independence from the rest of the world; and (iv) Integration – the system cannot consist of nearly independent parts.  The last two requirements are relatively easy to apply; however, the definition of ‘substantial’ in the first two requirements is open to interpretation, which leads to discussion of the size of neuronal assembly required for consciousness and whether the 500 million neurons in an octopus might be sufficient [see ‘Intelligent aliens?‘ on January 16th, 2019].

Source:

Max Tegmark, Life 3.0 – being human in the age of artificial intelligence, Penguin Books, Random House, UK, 2018.

Image: Ollie the Octopus at the Ocean Lab, (Ceridwen CC BY-SA 2.0)


When will you be replaced by a computer?

I have written before about extending our minds by using external computing power in our mobile phones [see ‘Science fiction becomes virtual reality‘ on October 12th, 2016; and ‘Thinking out of the skull‘ on March 18th, 2015]; but, how about replacing our brain with a computer?  That’s the potential of artificial intelligence (AI); not literally replacing our brain, but at least taking over jobs that are traditionally believed to require our brain-power.  For instance, in a recent test, an AI lawyer found 95% of the loopholes in a non-disclosure agreement in 22 seconds while a group of human lawyers found only 88% in 90 minutes, according to Philip Delves Broughton in the FT last weekend.

If this sounds scary, then consider for a moment the computing power involved.  Lots of researchers are interested in simulating the brain and it has been estimated that the computing power required is around one hundred petaFLOPS (FLoating point Operations Per Second), which, conveniently, is equivalent to the world’s most powerful computers.  At the time of writing, the world’s most powerful computer was ‘Summit‘ at the US Oak Ridge National Laboratory, which is capable of 200 petaFLOPS.  However, simulating the brain is not the same as reproducing its intelligence; and petaFLOPS are not a good measure of intelligence because, while ‘Summit’ can multiply many strings of numbers together per second, it would take you and me many minutes to multiply two strings of numbers together, giving us a rating of one hundredth of a FLOP or less.
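The scale of the mismatch in raw arithmetic speed is easy to check with a back-of-envelope calculation; the figures below come from the text, except for the human rate of one hundredth of a FLOP, which is the rough assumption made above:

```python
# Back-of-envelope comparison of raw arithmetic throughput.
summit_flops = 200e15     # Summit's peak: 200 petaFLOPS
brain_sim_flops = 100e15  # estimated requirement to simulate a brain
human_flops = 0.01        # assumed: ~one multiplication per 100 seconds

# Summit comfortably exceeds the brain-simulation estimate...
print(summit_flops >= brain_sim_flops)

# ...and out-multiplies an unaided human by twenty orders of magnitude.
ratio = summit_flops / human_flops
print(f"Summit is ~{ratio:.0e} times faster at raw arithmetic")
```

As the paragraph argues, though, this ratio says nothing about intelligence; it only measures the narrow task of multiplying numbers.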

So, raw computing power does not appear to equate to intelligence; instead, intelligence seems to be related to our ability to network our neurons together in massive assemblies that flicker across our brain interacting with other assemblies [see ‘Digital hive mind‘ on November 30th, 2016]. We have about 100 billion neurons compared with the ‘Summit’ computer’s 9,216 CPUs (Central Processing Units) and 27,648 GPUs (Graphics Processing Units); so, it seems unlikely that it will be able to come close to our ability to be creative or to handle unpredictable situations, even accounting for the multiple cores in the CPUs.  In addition, it requires a power input of 13 MW, or a couple of very large wind turbines, compared to 80 W for the base metabolic rate of a human, of which the brain accounts for about 20%; so, its operating costs render it an uneconomic substitute for the human brain in activities that require intelligence.  Hence, while computers and robots are taking over many types of jobs, it seems likely that a core group of jobs involving creativity, unpredictability and emotional intelligence will remain for humans for the foreseeable future.
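The power argument above can be made concrete with the same back-of-envelope style, using only the figures quoted in the text (13 MW for Summit, 80 W base metabolic rate, 20% of which feeds the brain):

```python
# Rough energy comparison: Summit versus the human brain.
summit_power_w = 13e6   # Summit's power input: ~13 MW
human_bmr_w = 80.0      # human base metabolic rate: ~80 W
brain_fraction = 0.20   # the brain accounts for ~20% of that

brain_power_w = human_bmr_w * brain_fraction  # ~16 W
print(f"Brain power budget: ~{brain_power_w:.0f} W")
print(f"Summit draws ~{summit_power_w / brain_power_w:,.0f} times more power")
```

On these numbers the brain runs on roughly 16 W, so Summit consumes on the order of 800,000 brains' worth of electricity, which is the basis of the operating-cost point above.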

Sources:

Max Tegmark, Life 3.0 – being human in the age of artificial intelligence, Penguin Books, 2018.

Philip Delves Broughton, Doom looms over the valley, FT Weekend, 16 November/17 November 2019.

Engelfriet, Arnoud, Creating an Artificial Intelligence for NDA Evaluation (September 22, 2017). Available at SSRN: https://ssrn.com/abstract=3039353 or http://dx.doi.org/10.2139/ssrn.3039353

See also NDA Lynn at https://www.ndalynn.com/

Meta-knowledge: knowledge about knowledge

As engineers, we like to draw simple diagrams of the systems that we are attempting to analyse because most of us are pictorial problem-solvers and recording the key elements of a problem in a sketch helps us to identify the important issues and select an appropriate solution procedure [see ‘Meta-representational competence’ on May 13th, 2015].  Of course, these simple representations can be misleading if we omit parameters or features that dominate the behaviour of the system; so, there is considerable skill in idealising a system so that the analysis is tractable, i.e. can be solved.  Students find it especially difficult to acquire these skills [see ‘Learning problem-solving skills‘ on October 24th, 2018] and many appear to avoid drawing a meaningful sketch even when examination marks are allocated to it [see ‘Depressed by exams‘ on January 31st, 2018].  Of course, in thermodynamics idealisation is complicated by the entropy of the system being reduced when we omit parameters, because with fewer parameters to describe the system there are fewer microstates in which the system can exist and, hence, according to Boltzmann, the entropy will be lower [see ‘Entropy on the brain‘ on November 29th, 2017].  Perhaps this is the inverse of realising that we understand less as we know more.  In other words, as our knowledge grows it reveals to us that there is more to know and understand than we can ever hope to comprehend [see ‘Expanding universe‘ on February 7th, 2018]. Is that the second law of thermodynamics at work again, creating more disorder to counter the small amount of order achieved in your brain?

Image: Sketch made during an example class