I suspect that artificial intelligence is somewhere near the top of the ‘Hype Curve’ [see ‘Hype cycle’ on September 23rd, 2015]. At the beginning of the year, I read Max Tegmark’s book, ‘Life 3.0 – being human in the age of artificial intelligence’, in which he discusses the prospects for artificial general intelligence and its likely impact on life for humans. Artificial intelligence means non-biological intelligence, and artificial general intelligence is the ability to accomplish any cognitive task at least as well as humans. Predictions vary about when we might develop artificial general intelligence, but developments in machine learning and robotics have energised people in both science and the arts. Machine learning consists of algorithms that use training data to build a mathematical model and make predictions or decisions without being explicitly programmed for the task. Three of the books that I read while on vacation last month featured or discussed artificial intelligence, which stimulated my opening remark about its position on the hype curve. Jeanette Winterson, in her novel ‘Frankissstein’, foresees a world in which humanoid robots can be bought by mail order; while Ian McEwan, in his novel ‘Machines Like Me’, goes back to the early 1980s and describes a world in which robots with a level of consciousness close to or equal to humans are just being introduced to the marketplace. However, John Kay and Mervyn King, in their recently published book ‘Radical Uncertainty – decision-making beyond numbers’, suggest that artificial intelligence will only ever enhance rather than replace human intelligence because it will not be able to handle non-stationary, ill-defined problems, i.e. problems for which there is no objectively correct solution and that change with time. I think I am with Kay and King, and that we will shortly slide down into the trough of the hype curve before we start to see the true potential of artificial general intelligence implemented in robots.
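As a concrete, much-simplified illustration of that definition of machine learning, the sketch below fits a model to training data and then makes a prediction for an unseen input; the load–deflection data and the choice of a linear model are my own invented example, not anything drawn from the books discussed above.

```python
# A minimal machine-learning sketch: an algorithm uses training data to
# build a mathematical model and then makes predictions, without being
# explicitly programmed with rules for the task. All data are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Training data: applied load (kN) and measured deflection (mm)
load = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
deflection = np.array([0.11, 0.19, 0.32, 0.41, 0.48])

model = LinearRegression()      # choose a simple model form
model.fit(load, deflection)     # 'learn' its parameters from the data

# Predict the deflection for a load that was not in the training set
print(model.predict(np.array([[6.0]])))
```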
I stayed in Sheffield city centre a few weeks ago and walked past the standard measures in the photograph on my way to speak at a workshop. In the past, when the cutlery and tool-making industry in Sheffield was focussed around small workshops, or little mesters, as they were known, these standards would have been used to check the tools being manufactured. A few hundred years later, the range of standards in existence has extended far beyond the weights and measures with which it started, and now includes standards for processes and artefacts as well as for measurements. The process of validating computational models of engineering infrastructure is moving slowly towards establishing an internationally recognised standard [see two of my earliest posts: ‘Model validation’ on September 18th, 2012 and ‘Setting standards’ on January 29th, 2014]. We have guidelines that recommend approaches for different parts of the validation process [see ‘Setting standards’ on January 29th, 2014]; however, many types of computational model present significant challenges when establishing their reliability [see ‘Spatial-temporal models of protein structures’ on March 27th, 2019]. Under the auspices of the MOTIVATE project, we are gathering experts in Zurich on November 5th, 2019 to discuss the challenges of validating multi-physics models, establishing credibility and the future use of data from experiments. It is the fourth in a series of workshops held previously in Shanghai, London and Munich. For more information and to register, follow this link. Come and join our discussions in one of my favourite cities, where we will be following ‘In Einstein’s footprints’ [posted on February 27th, 2019].
The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660.
The opinions expressed in this blog post reflect only the author’s view and the Clean Sky 2 Joint Undertaking is not responsible for any use that may be made of the information it contains.
A couple of weeks ago I wrote about epistemic dependence and the idea that we need to trust experts because we are unable to verify everything ourselves; life is too short and there are too many things to think about. However, this approach exposes us to the risk of being misled, and Julian Baggini has suggested that this risk is increasing with the growth of psychology, which has allowed more people to master methods of manipulating us and has led to ‘a kind of arms race of deception in which truth is the main casualty.’ He suggests that when we are presented with new information we should perform an epistemic triage by asking:
Is this a domain in which anyone can speak the truth?
What kind of expert is a trustworthy source of truth in that domain?
Is a particular expert to be trusted?
The deluge of information, which streams in front of our eyes when we look at the screens of our phones, computers and televisions, seems to leave most of us grasping for a hold on reality. Perhaps we should treat it all as fiction until we have performed Baggini’s triage, at least on the sources of the information streams, if not also on the individual items of information.
A month or so ago I gave a lecture entitled ‘Establishing FACTS (Fidelity And Credibility in Tests & Simulations)’ to the local branch of the Institution of Engineering and Technology (IET). Of course, my title was a play on words because the Oxford English Dictionary defines a ‘fact’ as ‘a thing that is known or proved to be true’ or ‘information used as evidence or as part of a report’. One of my current research interests is how we establish predictions from simulations as evidence that can be used reliably in decision-making. This is important because simulations based on computational models have become ubiquitous in engineering for, amongst other things, design optimisation and evaluation of structural integrity. These models need to possess the appropriate level of fidelity and to be credible in the eyes of decision-makers, not just their creators. Model credibility is usually provided through validation processes using a small number of physical tests that must yield a large quantity of reliable and relevant data [see ‘Getting smarter’ on June 21st, 2017]. Reliable and relevant data means making measurements with low levels of uncertainty under real-world conditions, which is usually challenging.
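As a toy illustration of the kind of quantitative comparison that sits at the heart of a validation process, the sketch below compares simulation predictions with test measurements and flags discrepancies that exceed the measurement uncertainty; the numbers, the simple relative-error measure and the single uncertainty value are my own assumptions for illustration, not the metrics used in the lecture.

```python
# Toy validation comparison: quantify the discrepancy between simulation
# predictions and physical test measurements, relative to the measurement
# uncertainty. All values are invented for illustration only.
import numpy as np

predicted = np.array([10.2, 11.8, 13.1, 14.9])   # simulation output
measured = np.array([10.0, 12.1, 12.8, 15.3])    # physical test data
u_meas = 0.3                                     # measurement uncertainty (same units)

# Relative error of each prediction with respect to the measurement
relative_error = np.abs(predicted - measured) / np.abs(measured)

# Flag predictions whose discrepancy exceeds the measurement uncertainty
outside_uncertainty = np.abs(predicted - measured) > u_meas

print("relative errors:", relative_error)
print("discrepancy exceeds measurement uncertainty:", outside_uncertainty)
```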
These topics recur through much of my research and have found applications in aerospace engineering, nuclear engineering and biology. My lecture to the IET gave an overview of these ideas using applications from each of these fields, some of which I have described in past posts. So, I have now created a new page on this blog with a catalogue of these past posts on the theme of ‘FACTS‘. Feel free to have a browse!