Digital Apes: On Humanity and AI

This review was first published in The Weekend Australian.

*

Shortly before his death in 2015, the fantasy writer Terry Pratchett agreed to be interviewed for a documentary about his life and legacy. ‘When I was a boy all I ever wanted was my own observatory,’ says Pratchett in the film’s final scene. ‘I knew even then that all the mysteries of life lay hidden in the stars. Having said that, stars aren’t that important. Whereas streetlamps – they’re very important. Why? Because they’re so rare! As far as we know, there are only a few million of them in the universe. And they were built by monkeys!’

The idea that the most impressive thing about human beings is their ability to manipulate their environment through tools is one with an illustrious history. On this view, humanity is Homo faber – literally, ‘man the maker’. The French philosopher Henri Bergson (1859-1941) saw human intelligence as rooted in pragmatism; it was ‘the faculty to create artificial objects, in particular tools to make tools, and to indefinitely variate its makings’. The first electric streetlamps appeared in Bergson’s lifetime and are, by this reckoning, at least as sublime as the Milky Way and its galactic neighbours. That was Pratchett’s point, of course – one underscored, not undermined, by the description of human beings as ‘monkeys’.

These three books – The Digital Ape by Nigel Shadbolt and Roger Hampson, 2062 by Toby Walsh and Made by Humans by Ellen Broad – all explore the relationship between the human animal and what might be its most momentous creation yet: artificial intelligence, or AI. In particular they are concerned with how the human intelligence that created AI will be changed by it, and with how these two intelligences will rub along (or not) in the future. Could it be that human beings are on the cusp of creating a new type of Homo – what Shadbolt and Hampson call ‘the digital ape’ and Walsh calls Homo digitalis?

Shadbolt and Hampson’s excellent book – the title of which is a reference to Desmond Morris’s classic The Naked Ape (1967) – sits solidly within the Homo faber school of thought. In a series of wide-ranging chapters, its authors argue that human beings are not just distinguished by their ability to use tools but also largely shaped by it. The use of tools predates Homo sapiens by over 3 million years, and it was the characteristics and abilities ‘selected for’ in that period that gave us our opposable thumbs, fine motor control and capacity for language. Fire, for example, is a technology that allowed hominids to ‘pre-digest’ food by cooking it, with the result that more energy became available for running a large and complex brain. ‘We didn’t invent our original tools’, write Shadbolt and Hampson; ‘They invented us.’

It follows that human beings have evolved in a way that makes them dependent on tools, and that the tools we create continue to affect our sense of what it means to be human; what goes for handaxes goes too for AI and for modern machines more generally. The power and reach of the human brain, write Shadbolt and Hampson, ‘are exponentially extended – and modified – by a cornucopia of machines entwined with our desires and behaviours, rooted in a vast industrial infrastructure.’ The human brain now operates in tandem with technology – a symbiosis of which the late Stephen Hawking was both an example and a living metaphor. Our complex personal and social being is increasingly augmented by cognitive computing.

As for the question of whether that computing will itself become ‘intelligent’ in the sense enjoyed (or endured) by humans, Shadbolt and Hampson are sceptical; they are more concerned about ‘natural stupidity’ than artificial intelligence, and remind us that human sentience is the result ‘of hundreds of millions of years of descent with modification from prior living things’. But that a new kind of human being is being ‘built’ by contemporary technology they are in no doubt. The 4% of DNA that separates us from chimpanzees becomes ever more significant, as the ‘stupendous multipliers’ of technology transform our modern environment (or, as the authors prefer, ‘habitat’). Soon, indeed, we will be able to manipulate or ‘edit’ human DNA itself – a prospect as frightening as it is impressive.

While Shadbolt and Hampson are broadly confident that the digital ape will remain an ape with a new array of transformative tools, Toby Walsh is more equivocal. Indeed it is never entirely clear, in the opening chapters of 2062, whether its author conceives of Homo digitalis as, literally, ‘the evolution of the genus Homo into digital form’ or as the evolution of Homo sapiens into a digitally augmented being. ‘[R]ather than replace us,’ he writes, at the end of a discussion of ‘the singularity’ – the theoretical point at which intelligent machines become capable of self-improvement and transcend the control of their human creators – ‘I am hopeful that we’ll work out how these machines can augment and extend us.’ The key word here, I submit, is ‘hopeful’.

For Walsh, 2062 is the year in which digital machines are likely to become as intelligent as humans. On the face of it this sounds like a foolhardy claim: as all sci-fi set designers know, nothing fades as fast as a vision of the future. But it is based on the opinions of experts in the field and is, as such, a justifiable marker. It also sets a useful limit on Walsh’s otherwise omnivorous text, which ranges from questions of work and war to issues of privacy and politics. 2062 is less a destination than a point in the skies to navigate by; it is a prediction, but a highly speculative one.

I imagine the key question most readers will have is what the author means by (machine) intelligence, and on this Walsh proves rather less confident than his selection of a date for its arrival would suggest. Not that he isn’t excellent on the directions in which AI is moving; his account of the way in which modern machines can learn from each other simultaneously in a process he calls ‘co-learning’ is insightful and illuminating. But his prediction of human-machine parity is undercut by his own admission that little is known about human consciousness and how and why it came about. What does it mean for thinking machines to be as ‘intelligent’ as human beings, when by some measures they have surpassed us already? Is it even useful to compare such forms of intelligence as are likely to emerge from the field of cognitive computing to the chaotic processes going on, right now, in my brain, as I write this sentence?

That is as much a question about us as it is a question about AI, and though Walsh is well aware of this, he lacks the rigour of Shadbolt and Hampson. Indeed I had the occasional sense – and it was only a very occasional sense – that Walsh’s notion of the human animal was based on some faulty foundations. ‘By cooperating together,’ he says at one point, ‘we live outside Darwin’s laws of evolution’ – a statement that suggests he’s mistaken ‘selfishness’ at the level of the gene for a lack of altruism: a fundamental misreading of human evolution, so far as my understanding goes. Even when highlighting the differences between machines and human beings he can sometimes strike a dodgy note. By 2062, he writes at one point, machines will write plays ‘to rival Shakespeare’s’ and paint pictures ‘as provocative as Picasso’s Guernica … But we’ll still prefer works produced by human artists. These works will speak to the human experience.’ But can art even exist outside human experience? This is a fundamental question and one the ‘Second Renaissance’ (as he calls it) inspired by AI will need to address, not least because creativity and wickedness may be more closely related than we think. Certainly we’ll need to think harder than this to retain top spot in the intellectual food chain.

If Walsh occasionally overreaches, Ellen Broad stays solidly within her area of expertise. Her lucid book Made by Humans is more narrowly focused than either The Digital Ape or 2062 and displays both the benefits and the drawbacks of that focus. On the one hand she gives us a brilliant exposition of the ethical issues attendant on AI, especially as it relates to Big Data. On the other her focus on ethical questions does tend to translate into an overemphasis on openness and regulation as a means through which to avoid the problems, or potential problems, in the AI field. As the former head of policy for the UK-based Open Data Institute (whose co-founder happens to be Nigel Shadbolt), Broad is well placed to address issues around data and who, or what, has access to it, as well as the rights and responsibilities that follow from those considerations. But those wanting a fuller exploration of what Broad terms ‘the AI condition’ should probably look to the books above.

As her title suggests, Broad’s principal focus is on how algorithms tend to carry within them the traces of their human creators. Dusting for what she calls ‘the fingerprints’ of human beings on the ‘sheen’ of AI, she finds plenty of evidence for her core contention: that there is no such thing as ‘raw’ data – the creation of which is inevitably affected by issues of sampling and ascertainment bias – and that such data as we can collect still needs to move through human minds, with all their ugly prejudices, before it ends up as knowledge and policy. As Broad puts it: ‘Getting from data to knowledge – to why and what it means – still requires lots of human assumptions and choices about what the data says, and what other data points might be useful.’

There are some staggering examples of AI bias. One of the most notorious concerns the way in which the historic over-policing of black populations in the US has influenced criminal sentencing algorithms, and Broad is very good on this and on comparable controversies. The problem, as she notes, is that we have moved very quickly from systems that can predict what people might want to buy to systems that can establish what kind of people we are, and so any bias that gets into the system can have devastating consequences. Broad also makes the excellent point that the more accurate an algorithm is, the more impenetrable it is likely to be to those whose numbers are getting crunched. The digital ape is still an ape, after all, and can only hold so much data in its head.

One point made by all three of these books is that it is up to human beings in their political capacity – in their capacity, so to speak, as political apes – to shape the technology of the future. In my view that is a bigger proposition than any of their authors allow, for if history shows us one thing it shows us that new technologies will tend to evolve according to the priorities of those already in power. Today the word ‘luddite’ is used as shorthand for someone who distrusts technology. But the textile workers who followed Ned Ludd’s example and set about smashing factory machinery were not against technology per se; they were against the system that, in employing that technology, would happily rob them of their livelihoods. Theirs was not the most constructive attempt to take control of their fate. But their refusal to regard their own marginalisation as inevitable is inspiring nonetheless.

Alternatively we can always take our cue from Pratchett, in his final words to camera:

… And they were built by monkeys, who also came up with philosophy, telescopes, E=mc² … And I have to say, I’m very proud to have been one of them. Right I’m off now. You’re in charge. Oh and one more thing: don’t bugger it up.

*

Books reviewed:

Nigel Shadbolt and Roger Hampson, The Digital Ape: How to Live (in Peace) with Smart Machines (Scribe; $32.99; 346pp)

Toby Walsh, 2062: The World that AI Made (La Trobe; $34.99; 302pp)

Ellen Broad, Made by Humans: The AI Condition (MUP; $29.99; 196pp)

