
Where is AI on the hype curve?

I suspect that artificial intelligence is somewhere near the top of the ‘Hype Curve’ [see ‘Hype cycle’ on September 23rd, 2015].  At the beginning of the year, I read Max Tegmark’s book, ‘Life 3.0 – being human in the age of artificial intelligence’, in which he discusses the prospects for artificial general intelligence and its likely impact on life for humans.  Artificial intelligence means non-biological intelligence, and artificial general intelligence is the ability to accomplish any cognitive task at least as well as humans.  Predictions vary about when we might develop artificial general intelligence, but developments in machine learning and robotics have energised people in both science and the arts.  Machine learning consists of algorithms that use training data to build a mathematical model and make predictions or decisions without being explicitly programmed for the task.  Three of the books that I read while on vacation last month featured or discussed artificial intelligence, which stimulated my opening remark about its position on the hype curve.  Jeanette Winterson in her novel, ‘Frankissstein‘, foresees a world in which humanoid robots can be bought by mail order; while Ian McEwan in his novel, ‘Machines Like Me‘, goes back to the early 1980s and describes a world in which robots with a level of consciousness close to or equal to humans are just being introduced to the marketplace.  However, John Kay and Mervyn King in their recently published book, ‘Radical Uncertainty – decision-making beyond numbers‘, suggest that artificial intelligence will only ever enhance rather than replace human intelligence because it will not be able to handle non-stationary, ill-defined problems, i.e. problems for which there is no objectively correct solution and that change with time.  I think I am with Kay and King, and that we will shortly slide down into the trough of the hype curve before we start to see the true potential of artificial general intelligence implemented in robots.
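For readers who have not met machine learning in practice, here is a minimal sketch of what ‘building a mathematical model from training data without being explicitly programmed’ can look like; it uses Python with the scikit-learn library, and the numbers are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: inputs and observed outcomes.
# The rule linking them is never written down explicitly.
X_train = np.array([[1.0], [2.0], [3.0], [4.0]])   # e.g. hours of effort
y_train = np.array([2.1, 3.9, 6.2, 7.8])           # e.g. measured outcomes

# The algorithm builds a mathematical model from the data...
model = LinearRegression().fit(X_train, y_train)

# ...and uses it to predict an outcome it has never seen.
print(model.predict(np.array([[5.0]])))  # roughly 9.9
```

The point is that no rule connecting input to output is ever programmed; the algorithm infers one from the training data and applies it to new cases.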

The picture shows our holiday bookshelf.

Four requirements for consciousness

Max Tegmark, in his book Life 3.0 – being human in the age of artificial intelligence, has taken a different approach to defining consciousness compared to those that I have discussed previously in this blog, which were based on synchronous firing of assemblies of neurons [see, for example, ‘Digital hive mind‘ on November 30th, 2016 or ‘Illusion of self‘ on February 1st, 2017] and on consciousness being an accumulation of sensory experiences [‘Is there a real ‘you’ or ‘I’?‘ on March 6th, 2019].  In his book, Tegmark discusses systems based on artificial intelligence; however, the four principles or requirements for consciousness that he identifies could be applied to natural systems: (i) Storage – the system needs substantial information-storage capacity; (ii) Processing – the system must have substantial information-processing capacity; (iii) Independence – the system has substantial independence from the rest of the world; and (iv) Integration – the system cannot consist of nearly independent parts.  The last two requirements are relatively easy to apply; however, the definition of ‘substantial’ in the first two requirements is open to interpretation, which leads to discussion of the size of neuronal assembly required for consciousness and whether the 500 million neurons in an octopus might be sufficient [see ‘Intelligent aliens?‘ on January 16th, 2019].
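To make the ambiguity of ‘substantial’ concrete, here is a toy sketch in Python of the four requirements as a checklist; the thresholds and the octopus capacities are entirely my own placeholders, not Tegmark’s, and the point is simply that requirements (iii) and (iv) reduce to yes/no judgements while (i) and (ii) hinge on wherever one chooses to draw the line.

```python
from dataclasses import dataclass

@dataclass
class System:
    storage_bits: float      # (i) information-storage capacity
    ops_per_second: float    # (ii) information-processing capacity
    independent: bool        # (iii) substantially independent of the rest of the world
    integrated: bool         # (iv) cannot be split into nearly independent parts

def meets_tegmark_requirements(s: System,
                               min_storage: float = 1e14,      # placeholder for 'substantial'
                               min_processing: float = 1e16):  # placeholder for 'substantial'
    # (iii) and (iv) are yes/no; (i) and (ii) depend on where the line is drawn
    return (s.storage_bits >= min_storage
            and s.ops_per_second >= min_processing
            and s.independent
            and s.integrated)

# Invented capacities for an octopus-like system with ~500 million neurons
octopus = System(storage_bits=5e11, ops_per_second=5e13,
                 independent=True, integrated=True)
print(meets_tegmark_requirements(octopus))  # False with these thresholds...
print(meets_tegmark_requirements(octopus,   # ...True if the bar is set lower
                                 min_storage=1e11, min_processing=1e13))
```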

Source:

Max Tegmark, Life 3.0 – being human in the age of artificial intelligence, Penguin Books, Random House, UK, 2018.

Image: Ollie the Octopus at the Ocean Lab, (Ceridwen CC BY-SA 2.0)

 

When will you be replaced by a computer?

I have written before about extending our minds by using external computing power in our mobile phones [see ‘Science fiction becomes virtual reality‘ on October 12th, 2016; and ‘Thinking out of the skull‘ on March 18th, 2015]; but how about replacing our brain with a computer?  That’s the potential of artificial intelligence (AI): not literally replacing our brain, but at least taking over jobs that are traditionally believed to require our brain-power.  For instance, in a recent test, an AI lawyer found 95% of the loopholes in a non-disclosure agreement in 22 seconds, while a group of human lawyers found only 88% in 90 minutes, according to Philip Delves Broughton in the FT last weekend.

If this sounds scary, then consider for a moment the computing power involved.  Lots of researchers are interested in simulating the brain, and it has been estimated that the computing power required is around one hundred petaFLOPS (FLoating point Operations Per Second), which, conveniently, is equivalent to the world’s most powerful computers.  At the time of writing, the world’s most powerful computer was ‘Summit‘ at the US Oak Ridge National Laboratory, which is capable of 200 petaFLOPS.  However, simulating the brain is not the same as reproducing its intelligence; and petaFLOPS are not a good measure of intelligence because, while ‘Summit’ can multiply many strings of numbers together per second, it would take you and me many minutes to multiply two strings of numbers together, giving us a rating of one hundredth of a FLOP or less.
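The arithmetic behind that rating is simple enough to spell out; the short Python sketch below uses the figures from the post plus my own illustrative assumption that a person needs about 100 seconds per multiplication, and shows the gulf in raw arithmetic speed is around twenty orders of magnitude.

```python
# Figures from the post, plus an assumed ~100 seconds per human multiplication
summit_flops = 200e15    # 'Summit': 200 petaFLOPS
human_flops = 1 / 100    # one floating-point operation per ~100 seconds

print(summit_flops / human_flops)  # ~2e19: about twenty orders of magnitude apart
```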

So, raw computing power does not appear to equate to intelligence; instead, intelligence seems to be related to our ability to network our neurons together in massive assemblies that flicker across our brain interacting with other assemblies [see ‘Digital hive mind‘ on November 30th, 2016].  We have about 100 billion neurons compared with the ‘Summit’ computer’s 9,216 CPUs (Central Processing Units) and 27,648 GPUs (Graphics Processing Units); so, it seems unlikely that it will be able to come close to our ability to be creative or to handle unpredictable situations, even accounting for the multiple cores in the CPUs.  In addition, it requires a power input of 13 MW, or a couple of very large wind turbines, compared to 80 W for the base metabolic rate of a human, of which the brain accounts for about 20%; so, its operating costs render it an uneconomic substitute for the human brain in activities that require intelligence.  Hence, while computers and robots are taking over many types of jobs, it seems likely that a core group of jobs involving creativity, unpredictability and emotional intelligence will remain for humans for the foreseeable future.
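The energy argument can be made equally concrete; using only the figures quoted above, a short calculation shows that ‘Summit’ draws roughly 800,000 times the power of a human brain.

```python
# Power figures quoted above: 13 MW for 'Summit' versus a human base
# metabolic rate of 80 W, of which the brain accounts for about 20%.
summit_power = 13e6          # watts
brain_power = 0.20 * 80      # watts, about 16 W

print(summit_power / brain_power)  # ~812,500 brains' worth of power
print(summit_power / 200e15)       # ~6.5e-11 joules per floating-point operation
```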

Sources:

Max Tegmark, Life 3.0 – being human in the age of artificial intelligence, Penguin Books, 2018.

Philip Delves Broughton, Doom looms over the valley, FT Weekend, 16 November/17 November 2019.

Arnoud Engelfriet, Creating an Artificial Intelligence for NDA Evaluation, September 22, 2017. Available at SSRN: https://ssrn.com/abstract=3039353 or http://dx.doi.org/10.2139/ssrn.3039353

See also NDA Lynn at https://www.ndalynn.com/

Engineers are slow, error-prone…

Professor Kristina Shea speaking in Munich

‘Engineers are slow, error-prone, biased, limited in experience and conditioned by education; and so we want to automate to increase reliability.’  This is my paraphrasing of Professor Kristina Shea speaking at a workshop in Munich last year.  At first glance it appears insulting to my profession, but actually it is just classifying us with the rest of the human race.  Everybody has these attributes, at least when compared to computers.  And they are major impediments to engineers trying to design and manufacture systems that have the high reliability and low cost expected by the general public.

Professor Shea is Head of the Engineering Design and Computing Laboratory at ETH Zurich.  Her research focuses on developing computational tools that enable the design of complex engineered systems and products.  An underlying theme of her work, which she was talking about at the workshop, is automating design and fabrication processes to eliminate the limitations caused by engineers.

Actually, I quite like these limitations, and perhaps they are essential because they represent the entropy or chaos that the second law of thermodynamics tells us must be created in every process.  Many people have expressed concern about the development of Artificial Intelligence (AI) capable of designing machines smarter than humans, which would quickly design even smarter machines that we could neither understand nor control.  Chaos would follow, possibly with apocalyptic consequences for human society.  To quote the British mathematician, I.J. Good (1916-2009): “There would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind.  Thus the first ultra-intelligent machine is the last invention that man need ever make.”  Stephen Cave, in his essay ‘Rise of the machines’ in the FT on 21/22 March 2015, citing James Barrat, suggested that “artificial intelligence could become super-intelligence in a matter of days, as it fixes its own bugs, rewriting its software and drawing on the wealth of information now available online”.

The decisions that we make are influenced, or even constrained, by a set of core values, unstated assumptions and what we call common sense, which are very difficult to express in prose, never mind computer code.  So it seems likely that an ultra-intelligent machine would lack some or all of these boundary conditions, with the consequence that, to quote Paul R. Ehrlich, ‘To err is human, to really foul things up you need a computer.’

Hence, I would like to think that there is still room for engineers to provide the creativity.  Perhaps Professor Shea is simply proposing a more sophisticated version of the out-of-skull thinking I wrote about in my post on March 18th, 2015.

Sources:

Follow the link to Kristina Shea’s slides from the International Workshop on Validation of Computational Mechanics Models.

Stephen Cave, ‘Rise of the machines’, essay in the Financial Times, 21/22 March 2015.

James Barrat, ‘Our Final Invention: Artificial Intelligence and the End of the Human Era‘, St. Martin’s Griffin, 2015.