Tag Archives: neurons

Thinking in straight lines is unproductive

I suspect that none of us think in straight lines. We have random ideas that we progressively arrange into some sort of order, or forget. The Nobel Laureate Herbert Simon thought that three characteristics defined creative thinking: first, the willingness to accept vaguely defined problems and gradually structure them; second, a preoccupation with problems over a considerable period of time; and, third, extensive background knowledge. The first two characteristics seem strongly connected because you need to think about an ill-defined problem over a significant period of time in order to gradually provide a structure that will allow you to create possible solutions. We need random thoughts in order to generate new structures and possible solutions that might work better than those we have already tried; so, thinking in straight lines is unlikely to be productive and instead we need intentional mind-wandering [see ‘Ideas from a balanced mind‘ on August 24th, 2016]. More complex problems will require more components to be assembled into the structure and, hence, are likely to require a larger number of neurons and more time, i.e. longer and deeper thought with many random excursions [see ‘Slow deep thoughts from a planet-sized brain‘ on March 25th, 2020].

In a university curriculum it is relatively easy to deliver extensive background knowledge and perhaps we can demonstrate techniques to students, such as sketching simple diagrams [see ‘Meta-knowledge: knowledge about knowledge‘ on June 19th, 2019], so that they can gradually define vaguely posed problems; however, it is difficult to persuade students to become preoccupied with a problem since many of them are impatient for answers.  I have always found it challenging to teach creative problem-solving to undergraduate students; and, the prospect of continuing limitations on face-to-face teaching has converted this challenge into a problem requiring a creative solution in its own right.

Source:

Simon HA, Discovery, invention, and development: human creative thinking, Proc. National Academy of Sciences, USA (Physical Sciences), 80:4569-71, 1983.

Slow deep thoughts from a planet-sized brain

I overheard a clip on the radio last week in which someone was parodying the quote from Marvin, the Paranoid Android, in The Hitchhiker’s Guide to the Galaxy: ‘Here I am with a brain the size of a planet and they ask me to pick up a piece of paper. Call that job satisfaction? I don’t.’ It set me thinking about something that I read a few months ago in Max Tegmark’s book: ‘Life 3.0 – being human in the age of artificial intelligence‘ [see ‘Four requirements for consciousness‘ on January 22nd, 2020]. Tegmark speculates that since consciousness seems to require different parts of a system to communicate with one another and form networks or neuronal assemblies [see ‘Digital hive mind‘ on November 30th, 2016], the thoughts of large systems will necessarily be slower. Hence, the process of forming thoughts in a planet-sized brain will take much longer than in a normal-sized human brain. However, the more complex assemblies that are achievable with a planet-sized brain might imply that the thoughts and experiences would be much more sophisticated, if few and far between. Tegmark suggests that a cosmic mind with physical dimensions of a billion light-years would only have time for about ten thoughts before dark energy fragmented it into disconnected parts; however, these thoughts and associated experiences would be quite deep.
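Tegmark’s figure of about ten thoughts can be reproduced with a back-of-envelope calculation. The sketch below assumes that one ‘thought’ requires a signal to cross the whole brain at light speed, and that dark energy fragments the cosmos on a timescale of roughly ten billion years; both assumptions are my own illustrative choices, not values taken from the book’s derivation.

```python
# Back-of-envelope estimate of how many thoughts a cosmic mind could have
# before dark energy fragments it. All figures are illustrative assumptions.

def max_thoughts(brain_size_ly: float, horizon_yr: float) -> float:
    """A brain D light-years across needs at least D years per thought,
    since a signal crossing it at light speed covers 1 light-year per year."""
    years_per_thought = brain_size_ly
    return horizon_yr / years_per_thought

# A brain a billion light-years across, with an assumed ten billion years
# before dark energy splits it into causally disconnected parts:
print(max_thoughts(1e9, 1e10))  # → 10.0
```

The estimate is only a scaling argument: doubling the brain’s size halves the number of thoughts available within the same horizon.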

Sources:

Douglas Adams, The Hitchhiker’s Guide to the Galaxy, Penguin Random House, 2007.

Max Tegmark, Life 3.0 – being human in the age of artificial intelligence, Penguin Books, Random House, UK, 2018.


Four requirements for consciousness

Max Tegmark, in his book Life 3.0 – being human in the age of artificial intelligence, has taken a different approach to defining consciousness from those that I have discussed previously in this blog, which were based on the synchronous firing of assemblies of neurons [see, for example, ‘Digital hive mind‘ on November 30th, 2016 or ‘Illusion of self‘ on February 1st, 2017] and on consciousness being an accumulation of sensory experiences [see ‘Is there a real ‘you’ or ‘I’?‘ on March 6th, 2019]. In his book, Tegmark discusses systems based on artificial intelligence; however, the four principles or requirements for consciousness that he identifies could be applied to natural systems: (i) Storage – the system needs substantial information-storage capacity; (ii) Processing – the system must have substantial information-processing capacity; (iii) Independence – the system has substantial independence from the rest of the world; and (iv) Integration – the system cannot consist of nearly independent parts. The last two requirements are relatively easy to apply; however, the definition of ‘substantial’ in the first two is open to interpretation, which leads to discussion of the size of neuronal assembly required for consciousness and whether the 500 million neurons in an octopus might be sufficient [see ‘Intelligent aliens?‘ on January 16th, 2019].

Source:

Max Tegmark, Life 3.0 – being human in the age of artificial intelligence, Penguin Books, Random House, UK, 2018.

Image: Ollie the Octopus at the Ocean Lab (Ceridwen, CC BY-SA 2.0)


When will you be replaced by a computer?

I have written before about extending our minds by using external computing power in our mobile phones [see ‘Science fiction becomes virtual reality‘ on October 12th, 2016; and ‘Thinking out of the skull‘ on March 18th, 2015]; but, how about replacing our brain with a computer?  That’s the potential of artificial intelligence (AI); not literally replacing our brain, but at least taking over jobs that are traditionally believed to require our brain-power.  For instance, in a recent test, an AI lawyer found 95% of the loopholes in a non-disclosure agreement in 22 seconds while a group of human lawyers found only 88% in 90 minutes, according to Philip Delves Broughton in the FT last weekend.

If this sounds scary, then consider for a moment the computing power involved. Lots of researchers are interested in simulating the brain, and it has been estimated that the computing power required is around one hundred petaFLOPS (FLoating point Operations Per Second), which, conveniently, is comparable to the world’s most powerful computers. At the time of writing, the world’s most powerful computer was ‘Summit‘ at the US Oak Ridge National Laboratory, which is capable of 200 petaFLOPS. However, simulating the brain is not the same as reproducing its intelligence; and petaFLOPS are not a good measure of intelligence because, while ‘Summit’ can multiply many strings of numbers together every second, it would take you and me many minutes to multiply two strings of numbers together, giving us a rating of one-hundredth of a FLOP or less.
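The mismatch in raw arithmetic can be put in numbers. In the sketch below, Summit’s 200 petaFLOPS comes from the post itself, while the 100 seconds assumed for a human to multiply two long numbers by hand is my own illustrative figure, chosen to match the one-hundredth-of-a-FLOP rating.

```python
# Rough comparison of raw arithmetic throughput; one figure is from the
# post ('Summit' at 200 petaFLOPS), the other is an illustrative assumption.
SUMMIT_FLOPS = 200e15              # Summit's peak rate: 200 petaFLOPS
SECONDS_PER_HUMAN_MULTIPLY = 100   # assumed time to multiply two long numbers by hand

human_flops = 1 / SECONDS_PER_HUMAN_MULTIPLY   # 0.01 FLOPS, one-hundredth of a FLOP
ratio = SUMMIT_FLOPS / human_flops
print(f"Summit out-multiplies a human by a factor of about {ratio:.0e}")
```

A factor of roughly 2 × 10¹⁹ in raw operations, which makes the next point all the more striking: that advantage does not translate into intelligence.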

So, raw computing power does not appear to equate to intelligence; instead, intelligence seems to be related to our ability to network our neurons together in massive assemblies that flicker across our brain interacting with other assemblies [see ‘Digital hive mind‘ on November 30th, 2016]. We have about 100 billion neurons compared with the ‘Summit’ computer’s 9,216 CPUs (Central Processing Units) and 27,648 GPUs (Graphics Processing Units); so, it seems unlikely that it will come close to our ability to be creative or to handle unpredictable situations, even accounting for the multiple cores in the CPUs. In addition, it requires a power input of 13 MW, the output of a couple of very large wind turbines, compared to 80 W for the base metabolic rate of a human, of which the brain accounts for about 20%; so, its operating costs render it an uneconomic substitute for the human brain in activities that require intelligence. Hence, while computers and robots are taking over many types of jobs, it seems likely that a core group of jobs involving creativity, unpredictability and emotional intelligence will remain for humans for the foreseeable future.
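The power comparison above can be checked in a couple of lines; all figures here come straight from the post.

```python
# Energy comparison between 'Summit' and the human brain, using the
# figures quoted in the post: 13 MW for Summit, and 20% of an 80 W
# base metabolic rate for the brain.
SUMMIT_POWER_W = 13e6     # 13 MW input for Summit
HUMAN_BASE_RATE_W = 80    # human base metabolic rate
BRAIN_FRACTION = 0.20     # brain's share of the metabolic budget

brain_power_w = HUMAN_BASE_RATE_W * BRAIN_FRACTION   # 16 W
print(SUMMIT_POWER_W / brain_power_w)  # → 812500.0
```

On these numbers the brain is nearly a million times more power-efficient, which is the basis for calling Summit an uneconomic substitute.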

Sources:

Max Tegmark, Life 3.0 – being human in the age of artificial intelligence, Penguin Books, 2018.

Philip Delves Broughton, Doom looms over the valley, FT Weekend, 16 November/17 November 2019.

Arnoud Engelfriet, Creating an artificial intelligence for NDA evaluation, September 22, 2017. Available at SSRN: https://ssrn.com/abstract=3039353 or http://dx.doi.org/10.2139/ssrn.3039353

See also NDA Lynn at https://www.ndalynn.com/