Tacit knowledge is traditionally defined as knowledge that is not explicit, or that is difficult to express or transfer to someone else. This description of what it is not makes the definition itself tacit knowledge, which is not very helpful. Management guides resolve this by giving examples, such as aesthetic sense, or innovation and leadership skills, which are elusive skills that are hard to explain [see ‘Innovation out of chaos’ on June 29th, 2016 and ‘Clueless on leadership style’ on June 14th, 2017]. In engineering, there is a series of skills that are hard to explain or teach, including creative problem-solving [see ‘Learning problem-solving skills’ on October 24th, 2018], artful design [see ‘Skilled in ingenuity’ on August 19th, 2015] and elegant modelling [see ‘Credibility is in the eye of the beholder’ on April 20th, 2016]. In a university course we attempt to lay the foundations for this tacit engineering knowledge; however, much of it is gained at work through experience and comes to be regarded by organisations as part of their intellectual assets – the core of their competitiveness and the source of their sustainable technological advantage.

In our work on integrated nuclear digital environments, from which digital twins can be spawned, we would like to capture both explicit and tacit knowledge about complex systems throughout their life cycles, which will extend beyond the working lives of their designers, builders and operators. One of the potential advantages of a digital twin is as a knowledge management system: by duplicating the life of the physical system, it allows safer and cheaper operation in the long term as well as eventual decommissioning. However, besides the very nature of tacit knowledge, which makes its capture difficult, we are finding that its perceived value as an intellectual asset makes stakeholders reluctant to discuss it with us, never mind consider how it might be preserved as part of a digital twin.
Research has shown that tacit knowledge sharing is influenced by environmental factors including national culture, leadership characteristics and social networks [Cai et al., 2020]. I suspect that all of these factors were present in the heyday of the UK civil nuclear power industry, when it worked together to construct advanced and complex systems; however, the industry has not built a power station since 1995 and, at the moment, new power stations are cancelled more often than built, which has almost certainly depressed all of these factors. So, perhaps we should not be surprised by the difficulties encountered in establishing an integrated nuclear digital environment despite its importance for the future of the industry.
Where is AI on the hype curve?
I suspect that artificial intelligence is somewhere near the top of the ‘Hype Curve’ [see ‘Hype cycle’ on September 23rd, 2015]. At the beginning of the year, I read Max Tegmark’s book, ‘Life 3.0 – Being Human in the Age of Artificial Intelligence’, in which he discusses the prospects for artificial general intelligence and its likely impact on life for humans. Artificial intelligence means non-biological intelligence, and artificial general intelligence is the ability to accomplish any cognitive task at least as well as humans. Predictions vary about when we might develop artificial general intelligence, but developments in machine learning and robotics have energised people in both science and the arts. Machine learning consists of algorithms that use training data to build a mathematical model and make predictions or decisions without being explicitly programmed for the task. Three of the books that I read while on vacation last month featured or discussed artificial intelligence, which stimulated my opening remark about its position on the hype curve. Jeanette Winterson, in her novel ‘Frankissstein’, foresees a world in which humanoid robots can be bought by mail order; while Ian McEwan, in his novel ‘Machines Like Me’, goes back to the early 1980s and describes a world in which robots with a level of consciousness close or equal to that of humans are just being introduced to the marketplace. However, John Kay and Mervyn King, in their recently published book ‘Radical Uncertainty – decision-making beyond numbers’, suggest that artificial intelligence will only ever enhance rather than replace human intelligence because it will not be able to handle non-stationary, ill-defined problems, i.e. problems for which there is no objectively correct solution and which change with time. I think I am with Kay and King, and that we will shortly slide down into the trough of the hype curve before we start to see the true potential of artificial general intelligence implemented in robots.
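As an aside for readers who like to see an idea in code, here is a minimal sketch of what ‘using training data to build a mathematical model’ can mean in practice. It is my own illustrative toy example, a least-squares straight-line fit written in plain Python, and is not taken from any of the books mentioned above; real machine learning systems use far richer models, but the principle of fitting parameters to data rather than programming a rule explicitly is the same.

```python
# Toy illustration of 'learning' from training data: instead of coding
# a rule explicitly, we fit a model's parameters to examples and then
# use the fitted model to predict an unseen case.

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b to the training data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of (x, y) divided by variance of x.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Training data: the 'experience' the algorithm learns from.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

a, b = fit_line(xs, ys)
prediction = a * 5.0 + b  # predict for an input the model never saw
```

The same pattern, parameters adjusted to minimise error on training examples, underlies far more sophisticated techniques such as the neural networks driving the current excitement.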
The picture shows our holiday bookshelf.
Success is to have made people wriggle to another tune
Shortly before the pandemic started to have an impact in the UK, I went to our local second-hand bookshop and bought a pile of old paperbacks to read. One of them was ‘Daisy Miller and Other Stories’ by Henry James (published in 1983 as a Penguin Modern Classic). The title of this post is a quote from one of the ‘other stories’, ‘The Lesson of the Master’, which was first published in 1888. ‘Success is to have made people wriggle to another tune’ is said by the successful fictional novelist Henry St George as words of encouragement to the young novelist Paul Overt. It struck a chord with me because I think it sums up academic life. Success in teaching is to inspire a new level of insight and way of thinking amongst our students; while success in research is to change the way in which society, or at least a section of it, thinks or operates, i.e. to have made people wriggle to another tune.
Thinking in straight lines is unproductive
I suspect that none of us thinks in straight lines. We have random ideas that we progressively arrange into some sort of order, or forget. The Nobel Laureate Herbert Simon thought that three characteristics defined creative thinking: first, the willingness to accept vaguely defined problems and gradually structure them; second, a preoccupation with problems over a considerable period of time; and, third, extensive background knowledge. The first two characteristics seem strongly connected, because you need to think about an ill-defined problem over a significant period of time in order to gradually provide a structure that will allow you to create possible solutions. We need to have random thoughts in order to generate new structures and possible solutions that might work better than those we have already tried; so thinking in straight lines is unlikely to be productive, and instead we need intentional mind-wandering [see ‘Ideas from a balanced mind’ on August 24th, 2016]. More complex problems will require the assembling of more components in the structure and, hence, are likely to require a larger number of neurons and to take longer, i.e. to require longer and deeper thought with many random excursions [see ‘Slow deep thoughts from planet-sized brain’ on March 25th, 2020].
In a university curriculum it is relatively easy to deliver extensive background knowledge and perhaps we can demonstrate techniques to students, such as sketching simple diagrams [see ‘Meta-knowledge: knowledge about knowledge‘ on June 19th, 2019], so that they can gradually define vaguely posed problems; however, it is difficult to persuade students to become preoccupied with a problem since many of them are impatient for answers. I have always found it challenging to teach creative problem-solving to undergraduate students; and, the prospect of continuing limitations on face-to-face teaching has converted this challenge into a problem requiring a creative solution in its own right.
Source:
Simon HA, ‘Discovery, invention, and development: human creative thinking’, Proc. National Academy of Sciences USA (Physical Sciences), 80:4569–4571, 1983.