Tag Archives: artificial general intelligence

Imagination is your superpower

About a year ago I wrote an update on the hype around AI [see ‘Update on position of AI on hype curve: it cannot dream’ on July 26th, 2023].  Gartner’s hype curve has a ‘peak of inflated expectations’, followed by a ‘trough of disillusionment’ and then an upward ‘slope of enlightenment’ leading to a ‘plateau of productivity’ [see ‘Hype cycle’ on September 23rd, 2015].  It is unclear where AI is on the hype curve.  Tech companies are still pretty excited about it, and advertising is beginning to claim that all sorts of products are augmented by AI.  Maybe there is a hint of unfulfilled expectations, which suggests we are on the downward slope towards the trough of disillusionment; however, these analyses can really only be performed retrospectively.

It is clear that we can create algorithms capable of generative artificial intelligence, which can achieve levels of creativity similar to a human’s in a specific task.  However, we cannot create artificial general intelligence that can perform like a human across a wide range of tasks and achieve sentience.  Current artificial intelligence algorithms consume our words, images and decisions in order to replay them to us.  Shannon Vallor has suggested that AI algorithms are ‘giant mirrors made of code’ and that ‘these mirrors know no more of the lived experience of thinking and feeling than our bedroom mirrors know our inner aches and pains’.

The challenge facing us is that AI will make us lazy and that we will lose the capacity to think and to solve new problems creatively.  Instead of making myself a cup of coffee and sitting down to gather my thoughts and dream up a short piece for this blog, I could have put the title into ChatGPT and the task would have been done in about two minutes.  I just did, and it told me that imagination is a truly powerful force that fuels creativity, innovation and problem-solving, allowing us to envision new possibilities, create stories and invent technologies.  It added that imagination is the key to unlocking potential and driving progress.  This is remarkably similar to parts of an article by Martin Allen Morales in FT Weekend on November 25, 2023, titled ‘We need imagination to realise the good, not just stave off the bad’.  What is missing from the ChatGPT version is the recognition that imagination is a human superpower, and that without it we have no hope of ever achieving anything beyond what already exists.

Sources

Becky Hogge, Through the looking glass, FT Weekend, May 29, 2024.

Martin Allen Morales, We need imagination to realise the good, not just stave off the bad, FT Weekend, November 25, 2023.

Shannon Vallor, The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking, OUP, April 2024.

Update on position of AI on hype curve: it cannot dream

It would appear that I was wrong in 2020 when I suggested that artificial intelligence was near the top of its hype curve [see ‘Where is AI on the hype curve?’ on August 12th, 2020].  In the past few months the hype has reached new levels.  Initially, there were warnings about the imminent takeover of global society by artificial intelligence; however, recently the pendulum has swung back towards a more measured concern that the nature of many jobs will be changed by artificial intelligence, with some jobs disappearing and others being created.  I believe that the bottom line is that, while terrific advances have been made with large language models such as ChatGPT, artificial intelligence is artificial but it is not intelligent [see ‘Inducing chatbots to write nonsense’ on February 15th, 2023].  It cannot dream.  It is not creative or inventive, largely because it is very powerful applied statistics that needs data based on what has already happened or been produced.  So, if you are involved in solving mysteries (ill-defined, vague and indeterminate problems) rather than puzzles [see ‘Puzzles and mysteries’ on November 25th, 2020], then you are unlikely to be replaced by artificial intelligence in the foreseeable future [see ‘When will you be replaced by a computer?’ on November 20th, 2019].  Not that you should trust my predictions of the future! [see ‘Predicting the future through holistic awareness’ on January 6th, 2021]

Some things will always be unknown

Image: fruit fly nervous system, Albert Cardona, HHMI Janelia Research Campus, Wellcome Image Awards 2015.

The philosophy of science has oscillated between believing that everything is knowable and believing that some things will always be unknowable. In 1872, the German physiologist Emil du Bois-Reymond declared ‘we do not know and will not know’, implying that there would always be limits to our scientific knowledge. Thirty years later, the German mathematician David Hilbert stated that nothing is unknowable in the natural sciences. He believed that by considering some things to be unknowable we limited our ability to know. However, Kurt Gödel, a Viennese mathematician who moved to Princeton in 1940, demonstrated in his incompleteness theorems that any consistent, finitely describable mathematical system rich enough to express arithmetic will contain statements that are true but unprovable within it, and that such a system cannot demonstrate its own consistency. I think that this implies some things will remain unknowable, or at least uncertain. Gödel believed that his theorems implied the human mind is infinitely more powerful than any finite machine, and Roger Penrose has deployed the incompleteness theorems to argue that consciousness transcends the formal logic of computers, which perhaps implies that artificial intelligence will never replace human intelligence [see ‘Four requirements for consciousness’ on January 22nd, 2020].  At a more mundane level, Gödel’s theorems imply that engineers will always have to deal with the unknowable when using mathematical models to predict the behaviour of complex systems; and, of course, to avoid meta-ignorance, we have to assume that there are always unknown unknowns [see ‘Deep uncertainty and meta-ignorance’ on July 21st, 2021].
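For readers who would like the theorems in a more formal form, they can be sketched roughly as follows; the notation is my own shorthand rather than anything taken from Budiansky’s book or the review. Here T stands for any consistent, effectively axiomatised system strong enough to express arithmetic, T ⊢ φ means ‘T proves φ’, and Con(T) is the arithmetical sentence expressing T’s consistency.

\[
\text{(first theorem)}\qquad \exists\, G_T \;\text{ such that }\; T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T
\]
\[
\text{(second theorem)}\qquad T \nvdash \mathrm{Con}(T)
\]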

Source: Book review by Nick Stephen, ‘Journey to the Edge of Reason by Stephen Budiansky – ruthless logic’, FT Weekend, June 1, 2021.

Where is AI on the hype curve?

I suspect that artificial intelligence is somewhere near the top of the ‘hype curve’ [see ‘Hype cycle’ on September 23rd, 2015].  At the beginning of the year, I read Max Tegmark’s book, ‘Life 3.0 – Being Human in the Age of Artificial Intelligence’, in which he discusses the prospects for artificial general intelligence and its likely impact on life for humans.  Artificial intelligence means non-biological intelligence, and artificial general intelligence is the ability to accomplish any cognitive task at least as well as humans.  Predictions vary about when we might develop artificial general intelligence, but developments in machine learning and robotics have energised people in both science and the arts.  Machine learning consists of algorithms that use training data to build a mathematical model and then make predictions or decisions without being explicitly programmed for the task.

Three of the books that I read while on vacation last month featured or discussed artificial intelligence, which stimulated my opening remark about its position on the hype curve.  Jeanette Winterson, in her novel ‘Frankissstein’, foresees a world in which humanoid robots can be bought by mail order; while Ian McEwan, in his novel ‘Machines Like Me’, goes back to the early 1980s and describes a world in which robots with a level of consciousness close to or equal to that of humans are just being introduced to the marketplace.  However, John Kay and Mervyn King, in their recently published book ‘Radical Uncertainty – decision-making beyond numbers’, suggest that artificial intelligence will only ever enhance rather than replace human intelligence, because it will not be able to handle non-stationary, ill-defined problems, i.e. problems for which there is no objectively correct solution and which change with time.  I think I am with Kay and King, and that we will shortly slide down into the trough of the hype curve before we start to see the true potential of artificial general intelligence implemented in robots.
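As an illustration of that definition of machine learning, here is a minimal sketch of ‘learning’ a model from training data and using it to make a prediction. It is written in Python and assumes numpy and scikit-learn are available; the data and the choice of a simple linear model are my own illustrative assumptions, not drawn from any of the books mentioned above.

```python
# Minimal sketch of machine learning as described above: an algorithm builds a
# mathematical model from training data and then makes predictions, without
# being explicitly programmed with the rule it ends up learning.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: inputs X and noisy outputs y that roughly follow y = 3x + 2
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 4.9, 8.2, 10.9, 14.1])

model = LinearRegression()
model.fit(X, y)                          # build the model from the training data

print(model.predict(np.array([[5.0]])))  # predict an output for an unseen input (about 17)
print(model.coef_, model.intercept_)     # learned parameters, close to 3 and 2
```

The point of the sketch is that the rule y ≈ 3x + 2 is never written into the program; it is inferred from the training data, which is why such algorithms depend on data describing what has already happened.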

The picture shows our holiday bookshelf.