Puzzles and mysteries

Puzzles and mysteries are a pair of words that have taken on a whole new meaning for me since reading John Kay and Mervyn King’s book, ‘Radical uncertainty: decision-making for an unknowable future‘, during the summer vacation [see ‘Where is AI on the hype curve?‘ on August 12th, 2020]. They describe puzzles as well-defined problems with knowable solutions, whereas mysteries are ill-defined problems that have no objectively correct solution and are imbued with vagueness and indeterminacy.  I have written before about engineers being creative problem-solvers [see ‘Learning problem-solving skills‘ on October 24th, 2018], which leads to the question of whether we specialise in solving puzzles or mysteries, or perhaps both types of problem.  The problems that I set for students to solve for homework to refine and evaluate their knowledge of thermodynamics [see ‘Problem-solving in thermodynamics‘ on May 6th, 2015] clearly fall into the puzzle category because they are well-defined and a worked solution is available.  Although to many students these problems might appear to be mysteries, the intention is that with greater knowledge and understanding the mysteries will be transformed into mere puzzles.  It is also true that many real-world mysteries can be transformed into puzzles by research that advances the collective knowledge and understanding of society.  Part of the purpose of an engineering education is to equip students with the skills to make this transformation from mysteries to puzzles.  At undergraduate level we use problems that are mysteries only to the students, so that success is achievable; at postgraduate level, however, we use problems that are perceived as mysteries by both the student and the professor, with the intention that the professor can guide the student towards a solution.
Of course, some mysteries are intractable, often because we do not know enough to define the problem sufficiently even to start thinking about possible solutions.  These are tricky to tackle because it is unreasonable to expect a research student to solve them in a limited timeframe, and it is risky to offer to solve them in exchange for a research grant because you are likely to damage your reputation and prospects of future funding when you fail.  On the other hand, they are what makes research interesting and exciting.

Image: Extract from abstract by Zahrah Resh.

Where is AI on the hype curve?

I suspect that artificial intelligence is somewhere near the top of the ‘Hype Curve’ [see ‘Hype cycle’ on September 23rd, 2015].  At the beginning of the year, I read Max Tegmark’s book, ‘Life 3.0 – being a human in the age of artificial intelligence’, in which he discusses the prospects for artificial general intelligence and its likely impact on life for humans.  Artificial intelligence means non-biological intelligence, and artificial general intelligence is the ability to accomplish any cognitive task at least as well as humans.  Predictions vary about when we might develop artificial general intelligence, but developments in machine learning and robotics have energised people in both science and the arts.  Machine learning consists of algorithms that use training data to build a mathematical model and make predictions or decisions without being explicitly programmed for the task.  Three of the books that I read while on vacation last month featured or discussed artificial intelligence, which stimulated my opening remark about its position on the hype curve.  Jeanette Winterson in her novel, ‘Frankissstein‘, foresees a world in which humanoid robots can be bought by mail order; while Ian McEwan in his novel, ‘Machines Like Me‘, goes back to the early 1980s and describes a world in which robots with a level of consciousness close to or equal to humans are just being introduced to the marketplace.  However, John Kay and Mervyn King in their recently published book, ‘Radical Uncertainty – decision-making beyond numbers‘, suggest that artificial intelligence will only ever enhance rather than replace human intelligence because it will not be able to handle non-stationary ill-defined problems, i.e. problems for which there is no objectively correct solution and that change with time.
I think I am with Kay and King, and that we will shortly slide down into the trough of the hype curve before we start to see the true potential of artificial general intelligence implemented in robots.
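The definition of machine learning given above, algorithms that build a mathematical model from training data rather than being explicitly programmed for the task, can be illustrated with a minimal sketch. The data and the choice of a straight-line model here are hypothetical, purely for illustration:

```python
# Minimal illustration of machine learning: rather than hand-coding a rule,
# we estimate a model's parameters from training data and then use the
# fitted model to predict for inputs it has not seen.
# The "model" here is a straight line y = w * x + b fitted by least squares.

def fit_line(xs, ys):
    """Learn slope w and intercept b from training pairs (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares estimates for a single-feature linear model.
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

def predict(w, b, x):
    """Apply the learned model to a new input."""
    return w * x + b

# Hypothetical training data generated by the relationship y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
w, b = fit_line(xs, ys)
print(predict(w, b, 10.0))  # the model generalises to an unseen input
```

This is, of course, a puzzle in Kay and King’s terms: the problem is well-defined and the data stationary, which is exactly the setting in which such algorithms excel.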

The picture shows our holiday bookshelf.