Tag Archives: artificial intelligence

Beyond language with stochastic parrots

Decorative image of a summer flower

Some months ago, I wrote in unflattering terms about artificial intelligence applications (AI apps) and large language models (LLMs) [see ‘Ancient models and stochastic parrots’ on October 1st, 2025].  My view is changing, probably as AI apps develop and my user skills improve.  I have started using a couple of different free AI apps as research assistants in three ways.  First, when I am writing administrative documents, such as a job description for a Coordinator of AI in Education, for which the job title alone was sufficient for the app to generate a first draft that required only light editing and tailoring to the specific context.  Second, using a different AI app to answer questions about phenomena, which has allowed me to construct explanations for observations of new and/or complex systems – I could have delved into textbooks and monographs or searched research articles, but AI does this much more quickly.  The third way I have used AI apps is to identify gaps in knowledge that could be fruitful topics for research.  This is a more difficult task because AI apps only know about material they can find on the internet in the form of language or text.  Hence, I have to ask questions whose answers reveal something unknown or not understood.  This is not straightforward because LLMs are fundamentally constrained by language.  In ‘The Years’, Annie Ernaux wrote that ‘language will continue to put the world into words’.  Yann LeCun, Meta’s former chief scientist, has suggested that to understand how the world works, a model would need to learn from videos and spatial data, not just language, and that without this type of learning human-level intelligence is impossible.  He has set up a new company, Advanced Machine Intelligence Labs, to do just that.  Language is used by people to describe the world from their own perspective, which might be inaccurate, incomplete or distorted, and that can mislead LLMs.
However, using AI apps we can also ‘distort’ videos of the world, so machine intelligence will ultimately have to be based on direct observation of the real world, which, after all, is the approach that science attempts to use.

Source:

Yann LeCun, Intelligence is really about learning, FT Weekend, 3-4 January 2026.

Annie Ernaux, The Years, Fitzcarraldo Editions, London, 2018.

Is the autonomous individual ceasing to exist?

Society consists of a series of bubbles.  A century or so ago, your bubble was largely defined by where you lived, your village or neighbourhood, because few people travelled any significant distance and you probably knew everyone living around you.  A decade or so ago, your bubble was probably defined by the newspaper you read or the radio/TV channels you preferred [see ‘You’re all weird!’ on February 8th, 2017].  Today, social media defines bubbles that are widely dispersed geographically.  This both fractures local communities and gives influencers on social media a global reach.  Some social media ‘dictates what you shall think, it creates an ideology for you, it tries to govern your emotional life’.  The quote is from George Orwell’s 1941 essay, Literature and Totalitarianism.  He goes on: ‘And as far as possible it isolates you from the outside world, it shuts you up in an artificial universe in which you have no standards of comparison.’  Of course, he was writing about totalitarianism, not social media, but his words seem sinisterly appropriate to the apparent intention of some social media influencers and platforms that promote alternative narratives inconsistent with reality.  Orwell suggested that if totalitarianism became worldwide and permanent, then literature, the truthful expression of what one person thinks and feels, could not survive.  Despite Orwell’s fear that he was living ‘in an age in which the autonomous individual is ceasing to exist’, totalitarianism did not abolish freedom of thought in the 1940s.  Now, in the 2020s, we have to ensure that social media does not become a modern instrument of totalitarianism, suffocating freedom of thought, isolating large sections of society from reality, dictating ideology and governing emotional life.  We need to think for ourselves and encourage others to do the same.
In their book, ‘Radical Uncertainty – Decision-making for an Unknowable Future’, John Kay and Mervyn King repeatedly ask ‘What is going on here?’ as a device for thinking about and reviewing the evidence before reaching a conclusion.  It is a simple device that we could all usefully deploy in 2025.  Happy New Year!

Sources:

George Orwell, Literature and Totalitarianism, 1941 available at https://hackneybooks.co.uk/books/64/1006/LiteratureAndTotalitarianism.html

Nazanin Zaghari-Ratcliffe, The feeling of freedom, FT Weekend, 7th & 8th December 2024.

John Kay and Mervyn King, Radical Uncertainty – Decision-making for an Unknowable Future, Little Brown Book Group, 2020.

Imagination is your superpower

About a year ago I wrote an update on the hype around AI [see ‘Update on position of AI on hype curve: it cannot dream’ on July 26th, 2023].  Gartner’s hype curve has a ‘peak of inflated expectations’, followed by a ‘trough of disillusionment’, then an upward ‘slope of enlightenment’ leading to a ‘plateau of productivity’ [see ‘Hype cycle’ on September 23rd, 2015].  It is unclear where AI is on the hype curve.  Tech companies are still pretty excited about it, and advertising is beginning to claim that all sorts of products are augmented by AI.  Maybe there is a hint of unfulfilled expectations, which suggests we are on the downward slope towards the trough of disillusionment; however, these analyses can really only be performed retrospectively.  It is clear that we can create generative artificial intelligence algorithms capable of levels of creativity similar to a human in a specific task.  However, we cannot create artificial general intelligence that can perform like a human across a wide range of tasks and achieve sentience.  Current artificial intelligence algorithms consume our words, images and decisions and replay them to us.  Shannon Vallor has suggested that AI algorithms are ‘giant mirrors made of code’ and that ‘these mirrors know no more of the lived experience of thinking and feeling than our bedroom mirrors know our inner aches and pains’.  The challenge facing us is that AI will make us lazy and we will lose the capacity to think and solve new problems creatively.  Instead of making myself a cup of coffee and sitting down to gather my thoughts and dream up a short piece for this blog, I could have put the title into ChatGPT and the task would have been done in about two minutes.  I just did, and it told me that imagination is a truly powerful force that fuels creativity, innovation and problem-solving, allowing us to envision new possibilities, create stories and invent technologies.
Imagination is the key to unlocking potential and driving progress.  This is remarkably similar to parts of an article by Martin Allen Morales in the FT on November 25, 2023, titled ‘We need imagination to realise the good, not just stave off the bad’.  What is missing from the ChatGPT version is the recognition that imagination is a human superpower, and that without it we have no hope of ever achieving anything beyond what already exists.

Sources

Becky Hogge, Through the looking glass, FT Weekend, May 29, 2024.

Martin Allen Morales, We need imagination to realise the good, not just stave off the bad, FT Weekend, November 25, 2023.

Shannon Vallor, The AI Mirror: How to Reclaim our Humanity in an Age of Machine Thinking, OUP, April, 2024.

Machine learning weather forecasts and black swan events

Decorative painting of a stormy seascape

A couple of weeks ago I read about Google’s new weather forecasting algorithm, GraphCast.  It takes a radically new approach to forecasting by using machine learning rather than modelling the weather using the laws of physics [see ‘Storm in a computer’ on November 16th, 2022].  GraphCast uses a graph neural network that has been trained on 39 years (1979–2017) of historical data from the European Centre for Medium-Range Weather Forecasts (ECMWF).  It requires two inputs, the current state of the weather and the state six hours ago, and it predicts the weather six hours ahead at a 0.25-degree latitude-longitude resolution (about 17 miles) on 38 vertical levels.  This compares to ECMWF’s high-resolution forecasts, which have 0.1-degree resolution (about 7 miles), 137 levels and 1-hour timesteps.  Although training the neural network took about four weeks on 32 Cloud TPU v4 devices (Tensor Processing Units), a forecast requires less than a minute on a single device, whereas ECMWF’s high-resolution forecast requires a couple of hours on a supercomputer.  Within a day or so of reading about GraphCast, we watched ‘The Day After Tomorrow’, a movie in which a superstorm suddenly plunges the entire northern hemisphere into an ice age with dramatic consequences.  Part of the movie’s message is that humanity’s disregard for the state of the planet could lead to existential consequences.  It occurred to me that the traditional approach to weather forecasting using the laws of physics might predict the onset of such a superstorm and prevent it becoming a black swan event; however, it is very unlikely that forecasts based on machine learning would predict it, because there is nothing like it in the historical record used to train the neural network.
So, for the moment, we should continue to use the laws of physics to model and predict the weather, since climate change appears to be making superstorms more likely [see ‘More violent storms’ on March 1st, 2017].
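The forecasting loop described above can be sketched in a few lines: the model maps the two most recent weather states to the state six hours ahead, and longer forecasts are produced by feeding each prediction back in as an input.  In the sketch below, the `step` function is a hypothetical stand-in for the trained graph neural network (here just a linear extrapolation), so this illustrates only the autoregressive rollout, not GraphCast itself.

```python
def step(prev_state, curr_state):
    # Hypothetical stand-in for the trained graph neural network, which
    # in GraphCast operates on a 0.25-degree latitude-longitude grid with
    # 38 vertical levels. Here: simple linear extrapolation per variable.
    return [2 * c - p for p, c in zip(prev_state, curr_state)]

def rollout(prev_state, curr_state, n_steps):
    """Autoregressive forecast: each 6-hour prediction becomes an input
    to the next step, so errors in the learned model compound over time."""
    states = []
    for _ in range(n_steps):
        nxt = step(prev_state, curr_state)
        states.append(nxt)
        prev_state, curr_state = curr_state, nxt
    return states

# Two toy weather states (each a short list of variables); four 6-hour
# steps give a one-day forecast. A 10-day forecast would need 40 steps.
forecast = rollout([0.0, 1.0], [1.0, 2.0], n_steps=4)
```

Because every step is learned from the historical record and predictions are fed back in, the rollout can only ever recombine patterns it has seen before, which is why an unprecedented superstorm is so unlikely to appear in its output.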

Sources:

Blum A, The weather forecast may show AI storms ahead, FT Weekend, 18/19 November 2023.

Lam R, Sanchez-Gonzalez A, Willson M, Wirnsberger P, Fortunato M, Alet F, Ravuri S, Ewalds T, Eaton-Rosen Z, Hu W, Merose A, et al. Learning skillful medium-range global weather forecasting. Science, doi:10.1126/science.adi2336, 2023.

Image: Painting by Sarah Evans owned by the author.