Author Archives: Eann Patterson

Forecasts and chimpanzees throwing darts

During the coronavirus pandemic, politicians have taken to telling us that their decisions are based on the advice of their experts, while the news media have bombarded us with predictions from experts.  Perhaps not unexpectedly, with the benefit of hindsight, many of these decisions and predictions appear to have been ill-advised or inaccurate, which is likely to lead to a loss of trust in both politicians and experts.  However, this is unsurprising: the reliability of experts, particularly those willing to make public pronouncements, is well-known to be dubious.  Professor Philip E. Tetlock of the University of Pennsylvania assessed the accuracy of forecasts made by purported experts over two decades and found that they were little better than a chimpanzee throwing darts; moreover, the better-known experts seemed to be worse at forecasting [Tetlock & Gardner, 2016].  In other words, we should assign less credibility to those experts whose advice is most frequently sought by politicians or quoted in the media.  Tetlock’s research has found that the best forecasters are better at inductive reasoning, pattern detection, cognitive flexibility and open-mindedness [Mellers et al, 2015].  People with these attributes tend not to express unambiguous opinions but instead attempt to balance all factors in reaching a view that embraces many uncertainties.  Politicians and the media believe that we want to hear a simple message unadorned by the complications of describing reality; hence, they avoid the best forecasters and prefer those who provide a clear but usually inaccurate message.  Perhaps that’s why engineers are rarely interviewed by the media or quoted in the press: they tend to be good at inductive reasoning, pattern detection and cognitive flexibility, and are open-minded [see ‘Einstein and public engagement‘ on August 8th, 2018].
Of course, this was well-known to the Chinese philosopher Lao Tzu, who is reported to have said: ‘Those who have knowledge, don’t predict. Those who predict, don’t have knowledge.’

References:

Mellers, B., Stone, E., Atanasov, P., Rohrbaugh, N., Metz, S.E., Ungar, L., Bishop, M.M., Horowitz, M., Merkle, E. and Tetlock, P., 2015. The psychology of intelligence analysis: drivers of prediction accuracy in world politics. Journal of Experimental Psychology: Applied, 21(1), pp.1-14.

Tetlock, P.E. and Gardner, D., 2016. Superforecasting: The art and science of prediction. London: Penguin Random House.

Tacit hurdle to digital twins

Tacit knowledge is traditionally defined as knowledge that is not explicit, or that is difficult to express or transfer from one person to another.  This description of what it is not makes the definition itself a piece of tacit knowledge, which is not very helpful.  Management guides resolve this by giving examples, such as aesthetic sense, or innovation and leadership skills, which are elusive and hard to explain [see ‘Innovation out of chaos‘ on June 29th, 2016 and ‘Clueless on leadership style‘ on June 14th, 2017].  In engineering, there is a series of skills that are hard to explain or teach, including creative problem-solving [see ‘Learning problem-solving skills‘ on October 24th, 2018], artful design [see ‘Skilled in ingenuity‘ on August 19th, 2015] and elegant modelling [see ‘Credibility is in the eye of the beholder‘ on April 20th, 2016].  In a university course we attempt to lay the foundations for this tacit engineering knowledge; however, much of it is gained at work through experience and comes to be regarded by organisations as part of their intellectual assets – the core of their competitiveness and the source of their sustainable technological advantage.  In our work on integrated nuclear digital environments, from which digital twins can be spawned, we would like to capture both explicit and tacit knowledge about complex systems throughout their life cycles, which will extend beyond the working lives of their designers, builders and operators.  One of the potential advantages of a digital twin is as a knowledge management system: by duplicating the life of the physical system, it allows safer and cheaper operation in the long term as well as eventual decommissioning.  However, besides the very nature of tacit knowledge that makes its capture difficult, we are finding that its perceived value as an intellectual asset makes stakeholders reluctant to discuss it with us, never mind consider how it might be preserved as part of a digital twin.
Research has shown that tacit knowledge sharing is influenced by environmental factors including national culture, leadership characteristics and social networks [Cai et al, 2020].  I suspect that all of these factors were present in the heyday of the UK civil nuclear power industry when it worked together to construct advanced and complex systems; however, it has not built a power station since 1995 and, at the moment, new power stations are cancelled more often than built, which has almost certainly depressed all of these factors.  So, perhaps we should not be surprised by the difficulties encountered in establishing an integrated nuclear digital environment despite its importance for the future of the industry.

Reference: Cai, Y., Song, Y., Xiao, X. and Shi, W., 2020. The effect of social capital on tacit knowledge-sharing intention: the mediating role of employee vigor. SAGE Open, 10(3), article 2158244020945722.

A daily routine

I have been writing a weekly post for this blog since January 2013.  That’s more than 400 posts, which I thought sounded pretty impressive until I read about the Gentle Author, who has been publishing daily since 2010 on spitalfieldslife.com.  That’s more than 4000 stories; so, I am not prolific by comparison.  And the Gentle Author has promised to post 10,000 pieces, which apparently will take until 2037.  I am unsure whether I will still be writing a weekly post in 2037, or even 2027; but I plan to carry on for the moment.  Last week I read about another daily routine, one that has been sustained for nearly 40 years by Nancy Floyd.  She has been taking a daily photograph of herself since 1982 and plans to continue until her deathbed.  Her self-portrait series is available on her website and was recently featured in the FT Weekend magazine on August 8/9, 2020.  On the one hand, I am in awe of people who have the self-discipline to maintain such a daily activity; on the other hand, I feel that there is too much I want to do and think about to stop every day and take time out to write a blog post or snap a self-portrait.  The photograph shows a portrait of me taken by my youngest daughter earlier this month – perhaps the first in a series.

Where is AI on the hype curve?

I suspect that artificial intelligence is somewhere near the top of the ‘Hype Curve’ [see ‘Hype cycle’ on September 23rd, 2015].  At the beginning of the year, I read Max Tegmark’s book, ‘Life 3.0 – being a human in the age of artificial intelligence’, in which he discusses the prospects for artificial general intelligence and its likely impact on life for humans.  Artificial intelligence means non-biological intelligence, and artificial general intelligence is the ability to accomplish any cognitive task at least as well as humans.  Predictions vary about when we might develop artificial general intelligence, but developments in machine learning and robotics have energised people in both science and the arts.  Machine learning consists of algorithms that use training data to build a mathematical model and make predictions or decisions without being explicitly programmed for the task.  Three of the books that I read while on vacation last month featured or discussed artificial intelligence, which stimulated my opening remark about its position on the hype curve.  Jeanette Winterson, in her novel ‘Frankissstein‘, foresees a world in which humanoid robots can be bought by mail order; while Ian McEwan, in his novel ‘Machines Like Me‘, goes back to the early 1980s and describes a world in which robots with a level of consciousness close or equal to that of humans are just being introduced to the marketplace.  However, John Kay and Mervyn King, in their recently published book ‘Radical Uncertainty – decision-making beyond numbers‘, suggest that artificial intelligence will only ever enhance rather than replace human intelligence, because it will not be able to handle non-stationary, ill-defined problems, i.e. problems for which there is no objectively correct solution and that change with time.
I think I am with Kay and King, and that we will shortly slide down into the trough of the hype curve before we start to see the true potential of artificial general intelligence implemented in robots.
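For readers who would like to see the definition of machine learning above made concrete, here is a minimal sketch in Python of what ‘building a mathematical model from training data’ can mean in its simplest form: a least-squares straight-line fit to a handful of invented data points, from which a prediction is made for an unseen input.  The data values are hypothetical, chosen only for illustration; real machine-learning systems use far more elaborate models, but the principle is the same.

```python
# A minimal illustration of machine learning: the algorithm is not
# explicitly programmed with the relationship between input and output;
# instead it fits a mathematical model (here a straight line) to
# training data and then uses the model to predict.

def fit_line(xs, ys):
    """Ordinary least-squares fit of the model y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training data: inputs and observed outputs (hypothetical values).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

slope, intercept = fit_line(xs, ys)

# Predict the output for an input the model has never seen.
prediction = slope * 6.0 + intercept
```

The ‘learning’ here is just the calculation of the slope and intercept from the data; nothing about the mapping from 6.0 to its prediction was written into the program.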

The picture shows our holiday bookshelf.