Tag Archives: artificial intelligence

Opportunities lost in knowledge management using digital technology

Regular readers of this blog will know that I occasionally feature publications from my research group.  The most recent was ‘Predicting release rates of hydrogen from stainless steel’ on September 13th, 2023, and before that ‘Label-free real-time tracking of individual bacterium’ on January 25th, 2023 and ‘A thermal emissions-based real-time monitoring system for in situ detection of cracks’ in ‘Seeing small changes is a big achievement’ on October 26th, 2022.  The subjects of these publications might seem far apart, but they are linked by my interest in measuring events in the real world and using the data to develop and validate high-fidelity digital models.  Recently, I have stretched my research interests still further by supervising a clutch of PhD students with a relatively new collaborator working in the social sciences.  Two of the students have had their first papers published by the ASME (American Society of Mechanical Engineers) and the IEEE (Institute of Electrical and Electronics Engineers).  Their papers are not directly connected, but both explore the use of published information to gain new insights on a topic.  In the first one [1], we explored the similarities and differences between the safety cases for three nuclear reactors: a pair of research reactors – one fission and one fusion – and a commercial fission reactor.  We developed a graphical representation of the safety features in the reactors and their relationships to the fundamental safety principles set out by the nuclear regulators.  This has allowed us to gain a better understanding of the hazard profiles of fission and fusion reactors, which could be used to create the safety case for a commercial fusion reactor.  Fundamentally, this paper is about exploiting existing knowledge and looking at it in a new way to gain fresh insights, which we did manually rather than by automating the process using digital technology.
In the second paper [2], we explored the extent to which digital technologies are being used to create, collate and curate knowledge during and beyond the life-cycle of an engineering product.  We found that these processes were happening but generally not in a holistic manner.  Consequently, opportunities were being lost by not deploying digital technology in knowledge management to undertake multiple roles simultaneously, e.g., acting as repositories, transactive memory systems (group-level knowledge sharing), communication spaces, boundary objects (contact points between multiple disciplines, systems or worlds) and non-human actors.  There are significant challenges, as well as competitive advantages and organisational value to be gained, in deploying digital technology in holistic approaches to knowledge management.  However, despite the rapid advances in machine learning and artificial intelligence [see ‘Update on position of AI on hype curve: it cannot dream’ on July 26th, 2023] that will certainly accelerate and enhance knowledge management in a digital environment, a human is still required to realise the value of the knowledge and use it creatively.

References

  1. Nguyen, T., Patterson, E.A., Taylor, R.J., Tseng, Y.S. and Waldon, C., 2023. Comparative maps of safety features for fission and fusion reactors. Journal of Nuclear Engineering and Radiation Science, pp. 1–24.
  2. Yao, Y., Patterson, E.A. and Taylor, R.J., 2023. The Influence of Digital Technologies on Knowledge Management in Engineering: A Systematic Literature Review. IEEE Transactions on Knowledge and Data Engineering.

Update on position of AI on hype curve: it cannot dream

It would appear that I was wrong in 2020 when I suggested that artificial intelligence was near the top of its hype curve [see ‘Where is AI on the hype curve?‘ on August 12th, 2020].  In the past few months the hype has reached new levels.  Initially, there were warnings about the imminent takeover of global society by artificial intelligence; however, recently the pendulum has swung back towards a more measured concern that the nature of many jobs will be changed by artificial intelligence, with some jobs disappearing and others being created.  I believe that the bottom line is that while terrific advances have been made with large language models, such as ChatGPT, artificial intelligence is artificial but it is not intelligent [see ‘Inducing chatbots to write nonsense‘ on February 15th, 2023].  It cannot dream.  It is not creative or inventive, largely because it is, in essence, very powerful applied statistics that needs data based on what has already happened or been produced.  So, if you are involved in solving mysteries (ill-defined, vague and indeterminate problems) rather than puzzles [see ‘Puzzles and mysteries‘ on November 25th, 2020], then you are unlikely to be replaced by artificial intelligence in the foreseeable future [see ‘When will you be replaced by a computer?‘ on November 20th, 2019].  Not that you should trust my predictions of the future! [see ‘Predicting the future through holistic awareness‘ on January 6th, 2021]

Where is AI on the hype curve?

I suspect that artificial intelligence is somewhere near the top of the ‘Hype Curve’ [see ‘Hype cycle’ on September 23rd, 2015].  At the beginning of the year, I read Max Tegmark’s book, ‘Life 3.0 – being a human in the age of artificial intelligence’, in which he discusses the prospects for artificial general intelligence and its likely impact on life for humans.  Artificial intelligence means non-biological intelligence, and artificial general intelligence is the ability to accomplish any cognitive task at least as well as humans.  Predictions vary about when we might develop artificial general intelligence, but developments in machine learning and robotics have energised people in both science and the arts.  Machine learning consists of algorithms that use training data to build a mathematical model and make predictions or decisions without being explicitly programmed for the task.  Three of the books that I read while on vacation last month featured or discussed artificial intelligence, which stimulated my opening remark about its position on the hype curve.  Jeanette Winterson in her novel, ‘Frankissstein‘, foresees a world in which humanoid robots can be bought by mail order; while Ian McEwan in his novel, ‘Machines Like Me‘, goes back to the early 1980s and describes a world in which robots with a level of consciousness close to or equal to that of humans are just being introduced to the marketplace.  However, John Kay and Mervyn King in their recently published book, ‘Radical Uncertainty – decision-making beyond numbers‘, suggest that artificial intelligence will only ever enhance rather than replace human intelligence, because it will not be able to handle non-stationary ill-defined problems, i.e. problems for which there is no objectively correct solution and which change with time.
I think I am with Kay & King: we will shortly slide down into the trough of the hype curve before we start to see the true potential of artificial general intelligence implemented in robots.
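The definition of machine learning given above – algorithms that build a mathematical model from training data rather than being explicitly programmed for the task – can be illustrated with a minimal sketch.  The example below is purely illustrative and not connected to any system mentioned in the post: it ‘learns’ the slope and intercept of a straight line from observed points by ordinary least squares and then makes a prediction.

```python
# Minimal illustration of 'learning from data': fit a straight line
# y = a*x + b to training points by ordinary least squares, then predict.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope from covariance over variance; intercept from the means.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

def predict(model, x):
    a, b = model
    return a * x + b

# 'Training data' generated by y = 2x + 1; the fitted model recovers it.
model = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(predict(model, 10))  # 21.0
```

The point of the sketch is that nothing in the code states the rule y = 2x + 1; the parameters emerge from the data, which is precisely why such models struggle when, as Kay and King argue, the underlying problem is non-stationary and the past data no longer describe the future.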

The picture shows our holiday bookshelf.

Four requirements for consciousness

Max Tegmark, in his book Life 3.0 – being a human in the age of artificial intelligence, has taken a different approach to defining consciousness compared to those that I have discussed previously in this blog, which were based on the synchronous firing of assemblies of neurons [see, for example, ‘Digital hive mind‘ on November 30th, 2016 or ‘Illusion of self‘ on February 1st, 2017] and on consciousness being an accumulation of sensory experiences [see ‘Is there a real ‘you’ or ‘I’?‘ on March 6th, 2019].  In his book, Tegmark discusses systems based on artificial intelligence; however, the four principles or requirements for consciousness that he identifies could be applied to natural systems: (i) Storage – the system needs substantial information-storage capacity; (ii) Processing – the system must have substantial information-processing capacity; (iii) Independence – the system has substantial independence from the rest of the world; and (iv) Integration – the system cannot consist of nearly independent parts.  The last two requirements are relatively easy to apply; however, the definition of ‘substantial’ in the first two requirements is open to interpretation, which leads to discussion of the size of neuronal assembly required for consciousness and whether the 500 million neurons in an octopus might be sufficient [see ‘Intelligent aliens?‘ on January 16th, 2019].

Source:

Max Tegmark, Life 3.0 – being a human in the age of artificial intelligence, Penguin Books, Random House, UK, 2018.

Image: Ollie the Octopus at the Ocean Lab, (Ceridwen CC BY-SA 2.0)