
The end is nigh – or is it?

Two AI aficionados sound the alarm in this blend of third-rate sci-fi, low-grade tech analysis and bad geopolitical assessment

James Ball
Anti-AI protestors outside the Google DeepMind offices in King’s Cross, June 2025 (Vuk Valcic/Alamy Live News)
18 October 2025

When most people start screaming that the sky is falling, they can safely be ignored. But Eliezer Yudkowsky is not most people. He was one of the first to take the idea of superintelligent AI – artificial intelligence that greatly surpasses humanity – seriously. He played a role in introducing the founders of Google DeepMind to their first funder; and Sam Altman, the CEO of OpenAI, credited Yudkowsky as a man who was ‘critical in the decision’ to start the organisation. His influence goes still further – he was a key thinker motivating the effective altruism movement and its founders, and the wider rationalist movement to which they belonged.

Through his rationalist Harry Potter fanfiction – yes, really – he inspired both Caroline Ellison, the CEO of Alameda Research, the trading firm behind Sam Bankman-Fried’s multi-billion-dollar cryptocurrency collapse, and the activist known as ‘Ziz’, whose extremist rationalist cult has been linked to a string of high-profile murders and suicides in the US.

Now Yudkowsky and his co-author, Nate Soares, are sounding the alarm in If Anyone Builds It, Everyone Dies. Neither man can be accused of pulling his punches. The title is neither hyperbole nor metaphor: the authors say that unless current AI development is stopped, and fast, it will kill us all.

That argument is certainly urgent, but it’s not well developed. The book builds on Yudkowsky’s roots as a fanfiction writer by opening every chapter with a ‘Just So Story’, a parable for the modern age to illustrate its argument. The middle section is given over to a near-future sci-fi scenario in which a fictional AI company, Galvanic, develops an AI called Sable which (spoiler alert) goes on to kill off humanity through a lab-engineered form of mega-cancer. The first people it targets? AI researchers, of course.

Sadly, the parables have all the verve and depth of a school play about road safety. One asks readers to imagine that ‘there was a tiger god that had made tigers, and a redwood god that had made redwood trees’, one for each species, playing a game to ‘attain dominion’ for their species. Two million years ago ‘an obscure ape god’ said that, though it would take ‘a few more moves’, he had won already – because humanity had developed the ability to think. The other gods were baffled: the smallpox god could kill humans, scorpions could poison them, and so on. But we, the readers, are invited to look upon humanity’s ascendancy – at least until we’ve invented machines that can out-think us.

The basic idea that an intelligent AI might have different values to humanity is explained through a six-page lesson about alien birds keeping stones in their nests. That AI might have more advanced ways to kill us than we currently understand is explained through the Aztecs confidently awaiting Spanish invaders, unaware of the existence of guns. A theme soon emerges in the various stories: sceptics scoff at the wiser, alarmist character shortly before dying horribly.

The single thing all doomsday scenarios have in common is that eventually one of them will be correct. And perhaps this is the one. But the authors seem to skip an essential part of their thesis. The book accurately argues that AI development has gone faster than expected, and deftly explains both how current models work and why no one fully understands their reasoning or their actions. It then asserts that it would be easy for a much more advanced AI to kill us all – but skips over the route to such an AI: why safeguards would inevitably fail, or why we would necessarily be killed.

Possibly unintentionally, the authors offer up one fantastic way to prevent AI exterminating humanity. Their proposed means of delaying or preventing the rise of superintelligence would stand a huge chance of starting a global thermonuclear war, saving AI the job. Yudkowsky and Soares suggest that nuclear powers should make it clear they would track all AI research closely and strike militarily at any country engaging in such research. These superpowers would, apparently, trust one another to stick to their commitments. China would presumably not object if the US attacked North Korea over AI research. While the authors often admit they are stepping outside their expertise, that doesn’t prevent them from making huge, confident recommendations.

If Anyone Builds It, Everyone Dies blends third-rate sci-fi, low-grade tech analysis and the worst geopolitical assessment anyone is likely to read this year. The book did not convince me that the end was nigh. But the process of reading it did at times make me think I’d welcome it.
