Tag Archives: AI

Going around in circles

I spent a day last month marking essays that were part of the assessment for a postgraduate module I have been teaching about engineering leadership. I use Boyatzis’s theory of self-directed learning to talk about how students can develop their leadership competences. Then we ask the students to reflect on the leadership and ethical issues associated with one or two incidents they have experienced or observed vicariously. Most of the time we teach engineering students to make rational technical decisions based on data, so they find it difficult to reflect on their feelings and emotions when faced with ethical and leadership dilemmas. We show them Gibbs’s cycle for reflective thinking and encourage them to use it to structure their thoughts and as a framework for their essay. There are obvious and natural similarities between the theories of Boyatzis and Gibbs. Of course, some students use them and some don’t. However, so far, this is an assignment for which they cannot use an essay mill or a large language model, because we ask them to write about their personal experiences and feelings, and LLMs do not understand anything, let alone feelings.

Goleman D, Boyatzis R & McKee A, The new leaders: transforming the art of leadership into the science of results, London: Sphere, 2002, p.139.

I have written previously on teaching leadership; see, for example, ‘Inspirational Leadership’ on March 22nd 2017, ‘Leadership is like shepherding’ on May 10th 2017, and ‘Clueless on leadership style’ on June 14th 2017.

Ancient models and stochastic parrots

In 2021 Emily Bender and her colleagues published a paper suggesting that the Large Language Models (LLMs) underpinning many Artificial Intelligence applications (AI apps) were little more than stochastic parrots. They described LLMs as ‘a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning’. This has fuelled the ongoing debate about the real capabilities of AI apps versus the hype from the companies trying to persuade us to use them. Most AI apps are based on statistical analysis of data, as stated by Bender et al.; however, there is a trend toward physics-based machine learning in which known laws of physics are combined with machine-learning algorithms trained on data sets [see for example the recent review by Meng et al., 2025]. We have been fitting data to models for a very long time. In the fifth century BC, the Babylonians made perhaps one of the greatest breakthroughs in the history of science when they realized that mathematical models of astronomical motion could be used to extrapolate data and make predictions. They had been recording astronomical observations since 3400 BC and the data was collated in cuneiform in the library at Nineveh belonging to King Ashurbanipal, who ruled from 669 to 631 BC. Our modern-day digital storage capacity in data centres might far exceed that of the clay tablets with cuneiform symbols found in Ashurbanipal’s library, but it seems unlikely that our data will survive for five thousand years and still be readable, as some of the Babylonians’ astronomical observations have done.
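To make Bender et al.’s description a little more concrete, here is a minimal sketch (in Python, using a made-up scrap of training text) of the simplest possible ‘stochastic parrot’: a bigram model that stitches words together purely according to how often they followed one another in its training data, with no reference to meaning. It is a caricature rather than a description of how modern LLMs are built, but it shows what generating text from probabilistic information alone looks like.

```python
import random
from collections import defaultdict

# Made-up training text for illustration only.
training_text = (
    "the babylonians recorded astronomical observations and used "
    "mathematical models of astronomical motion to make predictions "
    "and the babylonians recorded the observations on clay tablets"
)

# Count which words follow which in the training text.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def parrot(seed: str, length: int = 12) -> str:
    """Stitch words together according to observed co-occurrence statistics."""
    output = [seed]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:  # no observed continuation for this word
            break
        # Sampling from the list of observed followers reproduces their
        # relative frequencies in the training text.
        output.append(random.choice(candidates))
    return " ".join(output)

print(parrot("the"))
# e.g. 'the babylonians recorded the observations on clay tablets'
```

The output can look fluent because the word-to-word statistics are plausible, yet nothing in the program represents what any of the words mean.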

References:

Bender, E.M., Gebru, T., McMillan-Major, A. and Shmitchell, S., 2021, March. On the dangers of stochastic parrots: Can language models be too big?🦜. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 610-623).

Meng C, Griesemer S, Cao D, Seo S, Liu Y. 2025. When physics meets machine learning: A survey of physics-informed machine learning. Machine Learning for Computational Science and Engineering. 1(1):20.

Wisnom, S., 2025. The Library of Ancient Wisdom. Penguin Books.

Image: Parrot in the park – free stock photo by Pixabay on Stockvault.net

Imagination is your superpower

About a year ago I wrote an update on the hype around AI [see ‘Update on position of AI on hype curve: it cannot dream’ on July 26th, 2023]. Gartner’s hype curve has a ‘peak of inflated expectations’, followed by a ‘trough of disillusionment’, then an upward ‘slope of enlightenment’ leading to a ‘plateau of productivity’ [see ‘Hype cycle’ on September 23rd 2015]. It is unclear where AI is on the hype curve. Tech companies are still pretty excited about it and advertising is beginning to claim that all sorts of products are augmented by AI. Maybe there is a hint of unfulfilled expectations, which suggests we are on the downward slope towards the trough of disillusionment; however, these analyses can really only be performed retrospectively. It is clear that we can create algorithms capable of artificial generative intelligence, which can accomplish levels of creativity similar to a human in a specific task. However, we cannot create artificial general intelligence that can perform like a human across a wide range of tasks and achieve sentience. Current artificial intelligence algorithms consume our words, images and decisions to replay them to us. Shannon Vallor has suggested that AI algorithms are ‘giant mirrors made of code’ and that ‘these mirrors know no more of the lived experience of thinking and feeling than our bedroom mirrors know our inner aches and pains’. The challenge facing us is that AI will make us lazy and that we will lose the capacity to think and solve new problems creatively. Instead of making myself a cup of coffee and sitting down to gather my thoughts and dream up a short piece for this blog, I could have put the title into ChatGPT and the task would have been done in about two minutes. I just did, and it told me that imagination is a truly powerful force that fuels creativity, innovation and problem-solving, allowing us to envision new possibilities, create stories and invent technologies, and that imagination is the key to unlocking potential and driving progress. This is remarkably similar to parts of an article in the FT newspaper by Martin Allen Morales, published on November 25, 2023, and titled ‘We need imagination to realise the good, not just stave off the bad’. What is missing from the ChatGPT version is the recognition that imagination is a human superpower, and that without it we have no hope of ever achieving anything beyond what already exists.

Sources

Becky Hogge, Through the looking glass, FT Weekend, May 29, 2024.

Martin Allen Morales, We need imagination to realise the good, not just stave off the bad, FT Weekend, November 25, 2023.

Shannon Vallor, The AI Mirror: How to Reclaim our Humanity in an Age of Machine Thinking, OUP, April, 2024.

Update on position of AI on hype curve: it cannot dream

It would appear that I was wrong in 2020 when I suggested that artificial intelligence was near the top of its hype curve [see ‘Where is AI on the hype curve?’ on August 12th, 2020]. In the past few months the hype has reached new levels. Initially, there were warnings about the imminent takeover of global society by artificial intelligence; however, recently the pendulum has swung back towards a more measured concern that the nature of many jobs will be changed by artificial intelligence, with some jobs disappearing and others being created. I believe that the bottom line is that, while terrific advances have been made with large language models such as ChatGPT, artificial intelligence is artificial but it is not intelligent [see ‘Inducing chatbots to write nonsense’ on February 15th, 2023]. It cannot dream. It is not creative or inventive, largely because it is very powerful applied statistics that needs data describing what has already happened or been produced. So, if you are involved in solving mysteries (ill-defined, vague and indeterminate problems) rather than puzzles [see ‘Puzzles and mysteries’ on November 25th, 2020], then you are unlikely to be replaced by artificial intelligence in the foreseeable future [see ‘When will you be replaced by a computer?’ on November 20th, 2019]. Not that you should trust my predictions of the future! [see ‘Predicting the future through holistic awareness’ on January 6th, 2021]