Digital limits analogue future

Feet on Holiday I 1979 Henry Moore OM, CH 1898-1986 Presented by the Henry Moore Foundation 1982 http://www.tate.org.uk/art/work/P02699

Digital everything is trendy at the moment.  I am as guilty as everyone else: my research group is using digital cameras to monitor the displacement and deformation of structural components using a technique called digital image correlation (see my post '256 Shades of grey' on January 22nd, 2014).  Some years ago, in a similar vein, I pioneered a technique known as 'digital photoelasticity' (see my post 'Cow bladders lead to strain measurement' on January 7th, 2015).  But, what do we mean by 'digital'?  Originally it meant related to, resembling or operated by a digit or finger.  However, electronic engineers will refer us to A-to-D and D-to-A converters that transform analogue signals into digital signals and vice versa.  In this sense, digital means 'expressed in discrete numerical form' as opposed to analogue, which means something that can vary continuously.  Digital signals are ubiquitous because computers can handle digital information easily.  Computers could be described as very, very large series of switches that can be either on or off, which allows numbers to be represented in binary.  The world's second-largest computer, Tianhe-2, which I visited in Guangzhou a couple of years ago, has about 12.4 petabytes (about 10^16 bytes) of memory, which compares to the 100 billion (10^11) neurons in an average human brain.  There are lots of tasks at which the world's largest computers are excellent, but none of them can drive a car, ride a bicycle, tutor a group of engineering students and write a blog post on the limits of digital technology all in a few hours.  OK, we could connect specialized computers together wirelessly under the command of one supercomputer, but that's incomparable to the 1.4 kilograms of brain cells in an engineering professor's skull doing all of this without being reprogrammed or requiring significant cooling.
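To make 'expressed in discrete numerical form' concrete, here is a minimal sketch in Python of what an A-to-D converter does; the 3-bit converter and the sine wave standing in for the analogue signal are assumptions chosen purely for illustration:

```python
import numpy as np

# A continuously varying (analogue) signal: one cycle of a sine wave,
# sampled at 16 points in time for illustration.
t = np.linspace(0, 1, 16, endpoint=False)
analogue = np.sin(2 * np.pi * t)          # values vary smoothly between -1 and +1

# A 3-bit A-to-D converter can only represent 2**3 = 8 discrete levels,
# so every sample is rounded to the nearest available level.
bits = 3
levels = 2 ** bits
codes = np.round((analogue + 1) / 2 * (levels - 1)).astype(int)   # integers 0..7
digital = codes / (levels - 1) * 2 - 1                            # back to the -1..+1 range

for a, c, d in zip(analogue, codes.tolist(), digital):
    print(f"analogue {a:+.3f} -> code {c} ({c:03b}) -> digital {d:+.3f}")
```

With only eight levels available, the smooth variation of the original signal is lost; adding more bits makes the steps finer but never recovers a truly continuous range, which is the distinction between digital and analogue that matters here.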

So, what’s our brain got that the world’s latest computer hasn’t?  Well, it appears to be analogue and not digital.  Our consciousness appears to arise from assemblies of millions of neurons firing in synchrony and, because each neuron can fire at an infinite number of levels, our conscious thoughts can take on a multiplicity of forms that a digital computer can never hope to emulate because its finite number of switches have only two positions each: on and off.

I suspect that the future is not digital but analogue; we just don’t know how to get there, yet.  We need to stop counting with our digits and start thinking with our brains.

You’re all weird!

Recently, I attended a talk given by the journalist Richard Black to a group of scientists and engineers. He used a show of hands to establish that none of us read the tabloid newspapers and told us we were all weird.  He went on to discuss how we live in a bubble and rarely come into contact with people outside of it.  In media terms, I live in a bubble that can be defined by BBC Radio 4 in the UK or NPR in the USA.  And I was surprised how easy it was to substitute NPR for BBC Radio 4 when we moved to the USA, so the bubble extends across national boundaries.  But, nevertheless, we live in a relatively small bubble, because the tabloid papers are read by millions of people whereas the serious papers I prefer have only hundreds of thousands of readers. That’s why we are weird – we’re unusual.  It’s also why we are surprised when electorates make apparently irrational decisions.  However, they are only irrational to weird people who have access to the information and analysis available in our bubble.

We should not blame the media, because most of them are simply businesses whose bottom line is profit, which means they have to appeal to as many people as possible; there aren’t many people in our bubble, so most of the media doesn’t target us.  The same logic applies to politicians who want to be elected.

Interestingly, ‘weird’ is a late Middle English word, originally meaning ‘having the power to control destiny’.  So maybe being weird is a good thing?

Illusion of self

A few weeks ago, I wrote that some neuroscientists believe consciousness arises from the synchronous firing of assemblies of neurons [see my post ‘Digital hive mind‘ on November 30th, 2016].  Since these assemblies exist for only a fraction of a second before triggering other ones that replace them, this implies that what you think of as ‘yourself’ is actually a continuously changing collection of connected neurons in your brain, or, as V.S. Ramachandran has described it, ‘what drives us is not a self – but a hodgepodge of processes inside the skull’.

According to Kegan’s schema of cognitive development, newborn babies perceive the world as an extension of themselves.  However, as our consciousness develops, the idea of a ‘self’ evolves as a construct of the brain that allows us to handle the huge flow of sensory inputs arriving from our five senses, and we begin to separate ‘self’ from the objects around us.  This leads to us perceiving the world around us as separate from us but there to serve our needs, which we see as paramount.  Fortunately, the vast majority of us (more than 90%) move beyond this state and our relationships with other people become the dominant driver of our actions and identity.  Some people (about 35%) can separate their relationships and identity from ‘self’ and hence are capable of more nuanced decision-making – this is known as the Institutional stage. About one percent of the population are capable of holding many identities and handling the paradoxes that arise from deconstructing the ‘self’ in the Inter-individual stage.

Of course, Kegan’s stages of cognitive development are also a construct that helps us describe and understand the behaviour and levels of cognition observed in those around us.  There is some evidence that deeper, more complex thought processes, associated with higher levels of cognition, involve the firing of larger, more widespread assemblies of neurons across the brain; and perhaps these larger neuronal assemblies are self-reinforcing; in other words, the more we think deeply, the more capable we are of thinking deeply and, just occasionally, this leads to an original thought.  And maybe the one percent of individuals who are capable of handling paradoxical thoughts have brains capable of sustaining multiple large neuronal assemblies.  A little bit like lightning triggered from multiple points in the sky during a (brain)storm.

How does this relate to engineering?  Well, we touch on Kegan’s stages of cognitive development in our continuing professional development courses [see my post on ‘Technology Leadership’ on January 18th, 2017] for engineers and scientists aspiring to become leaders in research and development, because we want to advance their cognitive development and also to allow them to lead teams consisting of individuals at the Institutional and Inter-individual stages that will be capable of making major breakthroughs.

Sources:

Ramachandran, V.S., ‘In the hall of illusions’, in We are all stardust by Stefan Klein, London: Scribe, 2015.

Kegan, R., In over our heads: the mental demands of modern life, Cambridge, MA: Harvard University Press, 1994.

Kegan, R., The evolving self: problem and process in human development, Cambridge, MA: Harvard University Press, 1982.

Did cubism inspire engineering analysis?

Bottle and Fishes c.1910-2 Georges Braque 1882-1963 Purchased 1961 http://www.tate.org.uk/art/work/T00445

A few weeks ago we went to the Tate Liverpool with some friends who were visiting from out of town. It was my second visit to the gallery in as many months and I was reminded that on the previous visit I had thought about writing a post on a painting called ‘Bottle and Fishes’ by the French artist Georges Braque.  It’s an early cubist painting – the style was developed by Picasso and Braque at the beginning of the last century.  The art critic Louis Vauxcelles coined the term ‘cubism’ on seeing some of Braque’s paintings in 1908, describing them as reducing everything to ‘geometric outlines, to cubes’.  It set me thinking about how long it took the engineering world to catch on to the idea of reducing objects, or components and structures, to geometric outlines and then into cubes.  This is the basis of finite element analysis, which was not invented until about fifty years after cubism but is now ubiquitous in engineering design as the principal method of calculating deformation and stresses in components and structures.  An engineer can calculate the stresses in a simple cube with a pencil and paper, so dividing a structure into a myriad of cubes renders its analysis relatively straightforward but very tedious.  Of course, a computer removes the tedium and allows us to analyse complex structures relatively quickly and reliably.
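To see why dividing a structure into simple pieces makes the analysis straightforward but tedious, here is a minimal sketch in Python of a one-dimensional finite element calculation for a uniform bar pulled at one end; the dimensions, material properties and load are assumed values chosen only for illustration:

```python
import numpy as np

# A uniform bar fixed at one end and pulled at the other, divided into
# identical elements; each element behaves like a simple spring.
E = 200e9        # Young's modulus in Pa (steel, assumed)
A = 1e-4         # cross-sectional area in m^2 (assumed)
L = 1.0          # total length in m (assumed)
P = 10e3         # axial load in N applied at the free end (assumed)
n = 10           # number of elements

k = E * A / (L / n)                  # stiffness of each element (spring constant)

# Assemble the global stiffness matrix from the identical element matrices.
K = np.zeros((n + 1, n + 1))
for e in range(n):
    K[e:e+2, e:e+2] += k * np.array([[1, -1], [-1, 1]])

# Fix the first node, apply the load at the last node, and solve K u = f.
f = np.zeros(n + 1)
f[-1] = P
u = np.zeros(n + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])    # displacements of the free nodes

stress = E * np.diff(u) / (L / n)            # uniform axial stress in each element
print("tip displacement:", u[-1], "m")       # should match P*L/(E*A) = 5e-4 m
print("axial stress:", stress[0], "Pa")      # should match P/A = 1e8 Pa
```

Even this toy problem means assembling an eleven-by-eleven stiffness matrix and solving the resulting equations; doing the same by hand for a real three-dimensional component divided into thousands of elements would be unbearably tedious, which is why the method had to wait for computers.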

So, why did it take engineers fifty years to apply cubism?  Well, we needed computers sufficiently powerful to make it worthwhile and they only became available after the Second World War due to the efforts of Turing and his peers.  At least, that’s our excuse!  Nowadays the application of finite element analysis extends beyond stress fields to many field variables, including heat, fluid flow and magnetic fields.