On 27 June, 1835, two masters of the ancient Chinese game of Go faced off in a match which was the culmination of a years-long rivalry. The young prodigy Akaboshi Intetsu dominated the game early on using a secret move developed by his teachers. But a day into the contest a number of ghosts appeared to his opponent, Hon’inbō Jōwa, and showed him three critical moves with which he was able to win back control of the game. At the point when it became clear that Akaboshi would not be able to win the game, the young challenger violently coughed up blood onto the board. He was found dead a few days later. The match between Akaboshi and Jōwa has passed into Go lore as the ‘blood-vomiting game’, and subsequent historians have attributed Akaboshi’s decline to an undiagnosed pulmonary disease. They have been less forthcoming, however, on the matter of ghosts, which may still haunt the game to this day.
On 29 December, 2016, a new player appeared on Tygem, a popular online Go server on which many senior Go professionals trained and tested out new moves. The player was called Master, and immediately they began a blazing winning streak: sixty victories in just seven days, barely resting between games. Many of the victories were over world champion players. Master’s moves often seemed wild, even impetuous, but they always resulted in a win. After the fifty-ninth game Master was revealed to be – as had been suspected – not a human player, but an Artificial Intelligence. Master was the latest iteration of DeepMind and Google’s AlphaGo programme, which had gained worldwide attention when it defeated Go master Lee Sedol nine months earlier.
That match had been close, but the New Year games were already markedly different. When Go players tried to describe the AI’s style of play, they struggled to reconcile it with anything known. One leading Go player said, ‘they’re how I imagine games from far in the future’. Another reported feeling that an ‘alien intelligence’ had landed among them. The machine’s own creator, Demis Hassabis, said its moves seemed to emanate ‘from another dimension’.
The last great revolution in human-machine competition occurred in 1997, when IBM’s Deep Blue defeated Garry Kasparov at chess, up to that point a game with Go-like status as a bastion of human imagination and mental superiority. But compared to AlphaGo, Deep Blue might as well have belonged to the steam age; immensely powerful, IBM’s machine lacked anything we would call intelligence. It brute-forced Kasparov off the board, calculating games many moves ahead – but merely that: calculating. AlphaGo and its kind perform something more akin to imagination and intuition, and moreover they do so in mathematical realms the human mind cannot comprehend. While we can follow Deep Blue’s line of thought, the thinking behind AlphaGo’s decisions remains unknowable to us – and hence alien and otherworldly.
To call AlphaGo and systems like it Artificial Intelligence is in some ways an exaggeration. Each is a very narrow form of intelligence directed at a particular task, built on one particular computational configuration – neural networks – and trained with a technique called reinforcement learning. Neural networks are pieces of software modelled loosely on parts of the human brain; reinforcement learning trains them against a reward signal which encourages them to develop their own strategies. Despite this narrow focus, the benefits are generalisable: as well as learning other games, the techniques developed for AlphaGo have been deployed by Google in everything from medical diagnoses to YouTube recommendations. Machine learning is being used by others to screen applicants for jobs, pilot self-driving cars and target military drones. Neural networks are live and connected to the stock market, to distribution systems, to transport infrastructure – to the very social, material, and economic bases of our daily existence. It’s not just AlphaGo’s ‘god-like’ moves we have to contend with, but inscrutable decisions made about jobs and finances, healthcare and road safety – and the sense of mystery, surprise, strangeness and even horror that AlphaGo evokes will become a feature of more and more areas of our lives.
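Under the hood, AlphaGo combined deep neural networks with tree search and self-play at a scale no short example can reproduce, but the reward-driven core of reinforcement learning can be sketched in its simplest, tabular form. The sketch below – a toy illustration, not AlphaGo’s actual method or code – uses Q-learning in Python on an invented five-cell world: the agent is told only that reaching the goal earns a reward, and from that signal alone it develops a strategy for getting there. Every name and parameter here is hypothetical.

```python
import random

# A toy one-dimensional world, invented for illustration: the agent
# starts at cell 0 and is rewarded only for reaching cell 4.
# Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4
ACTIONS = [0, 1]

# The Q-table holds the agent's running estimate of future reward for
# each (state, action) pair. It starts out knowing nothing.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

def greedy(state):
    """Pick the best-valued action, breaking ties at random."""
    best = max(Q[state])
    return random.choice([a for a in ACTIONS if Q[state][a] == best])

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit what the table already says,
        # occasionally explore a random move instead.
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)

        next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == GOAL else 0.0

        # The Q-learning update: nudge the estimate toward the observed
        # reward plus the discounted value of the best follow-up move.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

# After training, the learned strategy steps right from every cell.
print([greedy(s) for s in range(N_STATES - 1)])
```

Scaled up – with a deep network in place of the table, and games of self-play in place of the toy world – this same reward-driven loop is broadly what allowed AlphaGo to discover strategies no human had taught it.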
The increasing complexity of the world around us should be cause for political and social concern, as intelligent but unknowable software works its way through society. But it’s also an opportunity to rethink our relationship with the wider world, and to reconsider our place in it. It seems significant that we are investing so much time and energy in building these toy versions of our own minds, just as our ability to control our own destiny and live on the planet sustainably appears to be failing. That failure is in part one of hubris: the belief that we can, as the planet’s dominant species, continue to act selfishly, wastefully and without regard to the future. But with AI comes the sense that we might not be the dominant actor for much longer – and an attendant opportunity to really consider what it means to share the world with other, barely knowable intelligences.
Because of course we’ve shared the universe with other intelligences for a long time, and we’ve handled the situation pretty badly. We have consistently downgraded or reclassified forms of intelligence that do not resemble our own narrow definition, and as a result felt free to treat their possessors as lesser creatures, lower orders of being, or as nothing at all. To ignore, consume, despoil and poison them, both to their detriment and, in the final, devastating analysis, to our own.
And yet the last few decades have also seen slow murmurings of recognition of other ways of thinking and being in the world: what appears to us as a sudden flowering of forms of intelligence which differ radically from our own. More and more species are being admitted, grudgingly, to the community of those that really think, from orangutans to elephants, both of which have recently been granted legal personhood in court cases. As we recognise the differing forms of intelligence present in both AI and other species, the business of ranking creatures by their intelligence begins to seem as stupid and violent as ranking human beings by race. This too is only the beginning of what we might recognise, only the beginning of another strangeness, if we choose to see intelligence as something that belongs not only to humans, and not only inside the human head.
Octopuses in aquaria are now known to recognise individual humans, and to prefer some to others, squirting water at those they dislike. Disliking brightness, they squirt water at the light bulbs above their tanks to extinguish them too: they are not merely aware of their environment, but seek to manipulate it. But octopuses, unlike apes and elephants, are also distinctly alien creatures, separated from the mammals by more than 500 million years of evolution, with networks of neurons distributed throughout their entire bodies. And perhaps our own minds are distributed too: the health and diversity of the human microbiome – the 2kg of other species we carry around with us, mostly in the gut, whose cells rival our own in number – have been shown to have measurable effects on our cognition.
At the other end of the scale, the largest organism in the world is a forest in Utah: a hundred-acre expanse of cloned aspen sharing a single, interconnected root system estimated to be 80,000 years old. And like all forests it feels, processes, and communicates. Recent scholarship has revealed the social relationships and collective intelligence of trees, which share resources, form alliances, and recognise distress in others, sending both aid and warnings through interlinked roots and clouds of pheromones, much in the way that insect colonies do. Under such circumstances, the strangeness of mere toy intelligences begins to pale.
For a long time we have been as unheeding of these intelligences as we have been deaf to the frequencies that lie beyond our hearing, and blind to the ultraviolet light that soaks the plants around us. But they have been here all along, and are becoming undeniable, just as the capacities of our own technologies threaten to supersede us. Having wilfully ignored the intelligences of others for so long, we now find the centrality of human intelligence on the point of being knocked violently aside by our own inventions. A new Copernican trauma looms, wherein we find ourselves standing upon a ruined planet, not smart enough to save ourselves, and no longer by any stretch of the imagination the smartest ones around. Any appeal to survival will have to be made both to technology and to other non-human intelligences, and it will be possible only if we are prepared to accept the toy intelligences we’re building not as yet more indications of our own superiority, but as intimations of our ultimate interdependence, and as calls to humility and care.
James Bridle is an artist and writer working across technologies and disciplines. His artworks have been commissioned by galleries and institutions and exhibited worldwide and on the internet. His writing on literature, culture and networks has appeared in magazines and newspapers including Wired, Domus, Cabinet, the Atlantic, the New Statesman, the Guardian, the Observer and many others, in print and online. New Dark Age, his book about technology, knowledge, and the end of the future, was published by Verso (UK & US) in 2018.
This piece is part of our Life Rewired Reads, a selection of essays commissioned in response to Life Rewired, our season exploring what it means to be human when technology is changing everything.