Why AI Evolution Looks More Like QAnon Than Blade Runner
Samuel Butler warned us about machine evolution in 1872’s Erewhon. But the real threat today isn’t sentient robots – it’s digital ecosystems wreaking havoc in ways science fiction never imagined
Welcome everyone to a bonus end(ish)-of-year edition of Techtris. This one features an essay I published in my most recent book The Other Pandemic: How QAnon Contaminated The World, and it’s on Dune, Terminator, The Matrix and their roots in an 1872 dystopian travelogue called Erewhon.
But mostly it’s about our lack of imagination when it comes to digital forms of life – why do we only imagine machines acting like humans? Why not like other forms of biological life – why not bacteria?
If you find it interesting, do consider buying the book (for yourself or for a loved one), or hitting subscribe below for occasional emails along similar lines. Merry Christmas, Happy Hanukkah, Feliz Navidad, Happy Holidays, and all that –James.
For as long as we’ve had computers, it feels like we’ve been dreaming of how they might destroy us. While there is no shortage of academic or philosophical thought on this front, it’s most apparent in popular culture, especially science-fiction.
Most stereotypically, we’ll picture something like the Terminator franchise: we create ever-more intelligent machinery and hand over more and more of society’s reins to it. Eventually the machines become so intelligent they gain sentience, and come to the conclusion their creators must be destroyed.
In the case of something like Terminator, the logic we imagine machine life adopting is unnervingly human – we must be abolished because we would subjugate new technological life, or else pose a threat to it.
More modern conceptions of artificial intelligence as a threat to humanity imagine a badly-programmed algorithm: a factory AI designed to maximise paperclip production, for example, might start breaking down other machinery for paperclips, hijacking mining operations, using the iron in blood for paperclips, and killing anyone who seeks to disable it – because that would slow paperclip production.[i]
But concern about the rise of machinery actually predates computing itself – as one bizarre but compelling 19th century satirical utopian novel set forth. That novel, Erewhon,[ii] was published (initially anonymously) by Samuel Butler in 1872, less than 20 years after Charles Darwin published his seminal text on evolution, On the Origin of Species, and is clearly at least in part inspired by it.
Erewhon is a story about a traveller in a strange land – Erewhon – and takes the form of his fictionalised travelogue. So far, so Victorian. In this land, crime is treated kindly, and is not punished. Those who steal, commit violence, or even murder, are generally pitied and offered help and treatment. Those who fall sick, however, are condemned and often sentenced to death – leading some of the sickly to feign alcoholism to explain their symptoms and receive kinder treatment.
This is but one of many oddities used by Butler to highlight issues in Victorian society, but one stands out across the generations: Butler’s imagined society in his 1872 book had banned all machines, a good 400 years before. His imagined traveller sets out the reasoning as follows:
“About four hundred years previously, the state of mechanical knowledge was far beyond our own … until one of the most learned professors of hypothetics wrote an extraordinary book proving that the machines were ultimately destined to supplant the race of man, and to become instinct with a vitality as different from, and superior to, that of animals, as animal to vegetable life.”[iii]
So convinced by the professor’s ideas were the denizens of Butler’s fictional kingdom that hundreds of years prior to the traveller’s arrival they had already purged all technology that wasn’t at least 271 years old – and then had banned all new technology after that point.
Such an idea might not feel like an especially original one for a satirical or utopian work of fiction now, but Butler’s work came long before anything remotely resembling modern computing had ever been conceived. Charles Babbage, sometimes credited as the father of modern computing, had introduced his “difference engine” some 50 years before, but it was never completed and remained something of a thought experiment.
As Butler’s novel shows, even long before computers could analyse – let alone think – we had a preoccupation that the evolution of mechanical (or digital) intelligences could do us harm. Butler did not stop at a few short paragraphs, though: later in the book, he devotes three full chapters to the writings of the “professor of hypothetics” who had turned the people against machines. The danger, that 19th century book set forth, was the faster pace of mechanical evolution:
“There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A mollusc has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing.”[iv]
Butler himself was not, in reality, writing a crude “beware of the machines” book – his thesis was generally in favour of moderation and against extremes of any sort. It’s not at all clear we’re supposed to believe what the citizens of Erewhon did on machinery was either genius or folly. Instead, he’s raising an idea well before its time, and encouraging us to think where that sensible middle response might be.
Whatever his intent, Butler seems to have succeeded in embedding human-machine conflict into the centre of science fiction and dystopias. Frank Herbert’s Dune series of books have sparked multiple adaptations – including a joyously camp movie, a beautiful but dour one, and multiple great video games[v] – and feature numerous staples of the genre.
These include a chosen one on whom the fate of the universe rests, a return to feudalism in a far-futuristic society, a caste of warrior nuns with mystical powers,[vi] and a much fought-over resource that holds the key to intergalactic travel.
What they don’t feature, though, is any kind of computer device. Conflict between man and machines plays absolutely no part in Herbert’s narrative (which spans multiple books and many thousands of pages). Instead, thinking machines had been eradicated years before in the “Butlerian Jihad” – a very clear nod to Samuel Butler – and everything up to and including spaceships in the series is simply operated manually.
Herbert, a man who wanted technology out of the way so his science fiction could focus on religion and prophecy, nonetheless chose the reasoning of technology dystopias to dispense with it – rather than say advanced technology would be boring, he explained a world without it.
Most other science fiction centres on conflict between humans and emerging artificial ‘life’ – The Matrix imagines humanity enslaved by thinking machines that have taken its place. Battlestar Galactica imagines humans and human-like machines, the Cylons, and the complex relationship between the two – the latter nearly destroys humanity, before realising they are more like their ‘parents’ than they might hope. Blade Runner too explored humanity’s difficult relationship with technology.
These works of fiction all share various traits. Most obviously, they picture humanity facing off against artificial intelligence that closely resembles humanity: it can communicate with us, it often looks like us, it is a sophisticated multi-cellular organism capable of abstract thought, which makes deliberate plans.
That’s perhaps necessary for works of fiction. AI resembling something like a colony of ants, or a swarm of bees, might be fascinating for research, but it would probably make a terrible movie antagonist – unless they were suddenly given human-like intelligence and an ability to monologue at the right point in the film.
A common thread throughout all these fictionalised portrayals of emerging intelligence from mechanical technology, networks, or artificial intelligence is that it can evolve faster than we can – the same idea that Dawkins seized upon when proposing the meme as the successor to the gene.
In the internet era, this seems simply true: memes work essentially like genes – units that combine and recombine to build larger entities, as genes build organisms, plus non-living but complex systems like viruses. A collection of memes – satanic child abuse, antisemitism, “do your own research” – stick together and make a new version of something like QAnon, which reproduces across the world, and occasionally evolves into a new version.
But it feels like science fiction skips to the last step of that logic – or at least the last step so far as humanity is concerned – and imagines that rapid evolution producing something rather like us. It skips the potential early steps, the bacteria, the amoebas, the fish, and so forth.
That means fiction doesn’t equip us to deal with challenges like digital pathogens, with challenges like QAnon and the global anti-vaccine movement. They have no structures and they have no leaders because they are products of their environment – the information ecosystem. Perhaps in time these digital microbes will evolve into something more closely resembling human intelligence. Perhaps some kind of more deliberately developed AI will do so. Perhaps nothing ever will.
It doesn’t matter whether we choose to see the idea of digital pathogens emerging and evolving as literal or metaphorical – we know from chaos theory and similar ideas that complex patterns can emerge from simple or even random systems.
One classic example of this is Langton’s Ant – a virtual ‘ant’ on a grid of white squares, following two simple rules: if the square is white, turn it black, then turn left and move forwards. If the square is black, turn it white, then turn right and move forwards. The first surprise is how random and chaotic the emerging black-and-white picture looks.
The second is that after around ten thousand moves, suddenly an endlessly-repeating and seemingly planned pattern emerges, sending the ant perpetually to the bottom-right of the screen[vii] and making a complex repeated pattern on the way.
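Those two rules really are the entire system – a minimal Python sketch, storing only the black squares (everything else is white), fits in a dozen lines:

```python
# Langton's Ant, using the rules described above:
#   white square -> paint it black, turn left, move forward
#   black square -> paint it white, turn right, move forward

def langtons_ant(steps):
    black = set()        # coordinates of black squares; all others are white
    x, y = 0, 0          # the ant's position
    dx, dy = 0, 1        # facing 'up'
    for _ in range(steps):
        if (x, y) in black:
            black.remove((x, y))   # black: turn it white...
            dx, dy = dy, -dx       # ...and turn right
        else:
            black.add((x, y))      # white: turn it black...
            dx, dy = -dy, dx       # ...and turn left
        x, y = x + dx, y + dy      # move forward one square
    return black, (x, y)

# For roughly the first ten thousand steps the pattern of black squares
# looks chaotic; after that the ant locks into its repeating 'highway'.
cells, position = langtons_ant(11_000)
```

Nothing in those rules hints at the highway that eventually appears – which is precisely the point about complex order emerging from trivially simple systems.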
Whether digital pathogens are actually evolving, or are simply patterns emerging from algorithmic rules far more complex than Langton’s Ant’s – the rules that govern what we see on YouTube and social media – is beside the point. Either way we are contending with something that has emerged as a new and unexpected consequence of our new digital ecosystem.
This means that we cannot think of this as a series of isolated problems – to do so would be to play whack-a-mole on a global geopolitical scale. Each part of the ecosystem that produces phenomena like QAnon is connected to each other part. Tackling them in isolation will cause damaging knock-on effects.
Think back – if you can – to learning about food webs (or food chains) in biology at school. We might get given the very simple example that:
Corn is eaten by grasshoppers, which are eaten by rats, which are in turn eaten by snakes.
Let’s imagine in this situation that those rats are starting to invade our homes, leading to calls to wipe them out. Rat exterminators are despatched and do their job brilliantly, wiping out 90% or more of the rat population. But that is every grasshopper’s dream: no predators.
The population of grasshoppers booms, eating more and more of the corn crop, until that becomes a severe threat to the harvest. Realising their mistake, the human population of the area calls off the rat catchers and even reintroduces a few rats, to control the grasshopper population.
But while the grasshoppers were booming, the snakes were having a hard time: deprived of their only source of food, most starved. And so, once the rats were reintroduced, the rats had no predators left and ample food – and so their population now booms. One problem causes another, and another, and another in turn – an ecosystem once in balance is unbalanced by each well-meaning intervention that doesn’t look at the system as a whole.
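The cascade can be made concrete with a deliberately crude toy simulation – every number below is invented purely for illustration, and real ecology is vastly messier – tracking just the grasshoppers and the corn as the rats come and go:

```python
# A toy model of the corn -> grasshopper -> rat chain described above.
# All parameters are invented for illustration only.

def simulate(rats, steps):
    hoppers, corn = 10, 100
    for _ in range(steps):
        # grasshoppers breed, but each rat eats some of them
        hoppers = max(0, hoppers * 2 - rats * 5)
        # the surviving grasshoppers eat into the corn crop
        corn = max(0, corn - hoppers // 2)
    return hoppers, corn

# With a couple of rats around, grasshopper numbers hold steady and most
# of the corn survives; wipe out the rats and the grasshopper population
# doubles every step while the corn crashes.
print(simulate(rats=2, steps=3))   # (10, 85)
print(simulate(rats=0, steps=3))   # (80, 30)
```

Even in a model this crude, removing one link reshapes every other population – the whack-a-mole problem in miniature.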
As offline, so online. In the first half of 2021, long after they should have, social networks like Facebook and Twitter finally started removing QAnon accounts en masse. But those accounts had lots of warning this was coming, and so the clampdown moved millions of users onto much more extreme private channels, such as Discord or Telegram.
These new users were then in the kind of networks that much more rapidly radicalise those within them. Looking at only part of the system and making only one intervention had caused a new problem in turn. It will carry on doing so for as long as we keep taking our whack-a-mole approach.
Just as real-world food webs are never as simple as the example above (which, if we’re honest, is more of a single food thread), online ecosystems are large and complex. But critically they do not stop online.
We could imagine the information ecosystems to be online, but with real-world consequences, and that would be bad enough. That’s the world where people are radicalised into becoming mass shooters, or where people are cut off from their families, or where polarised societies vote in dangerous populists.
But online information networks feed offline ones: Fox News feeds off the online extremist right, picking up its most successful storylines and regurgitating them, attracting new recruits to the causes. Then-Australian prime minister Scott Morrison, in an apology for very real and decades-long failings on institutional child abuse, added the word “ritual” to his comments, spurring new theories and new curiosity.
As the Republican party – much of it now beholden to its QAnon-infested base – moved towards the 2022 midterms, and in turn the 2024 election, it was no coincidence that suddenly elected representatives talked at almost every opportunity of “grooming”, whether “liberal”, “woke” or “LGBT”. “Grooming” is very much the language of QAnon, rehabilitated just enough for supposedly mainstream political discourse.
QAnon as a discrete phenomenon reached and convinced tens of millions of people around the world. The extreme wing of the antivax movement that it largely merged with captured tens if not hundreds of millions more. Those numbers alone should be enough to convince us that tackling these online ecosystems is essential, even if it’s enormously difficult.
But if it isn’t – if we convince ourselves that society has always had its fringes, and this is just the same as it ever was – we need to remember the interaction between fringe media, and fringe ideas, and mainstream media and politics. When opportunists see a voter base or a consumer base large enough, they will seek to profit – either financially or with power.
These new online ecosystems have become integrated with our existing ones, and they will not – can not – be disconnected. This is the world we live in now.
[i] This is a longstanding thought experiment, but also a genuinely fun and disturbing online game: https://www.decisionproblem.com/paperclips/index2.html
[ii] Read the word backwards
[iii] Erewhon, p97
[iv] Ibid, p199
[v] Dune 2 is the usual go-to here, but personally I much prefer the point-and-click Dune adventure, in all its joyous 1980s-ness.
[vi] Yes, I know they’re not actually nuns
[vii] If it starts facing upwards