Sean Thomas

Are we ready for P(doom)?

It’s a greater threat than climate change

From Spectator Life
Thomas Cole, The Course of Empire: Destruction (1836)

It’s difficult to remember a time before climate change – a time when our daily discourse, our newspaper front pages, endless movies and TV documentaries, and Al Gore, Greta Thunberg and Sir David Attenborough (Peace Be Upon Him), were not lecturing us, sternly and constantly, about the threat to our planet from the ‘climate emergency’, the melting of the ice caps, the levelling of the Amazon jungle, the failure of the Gulf Stream, the desertification of Spain, and the complete and total cessation of snowfall in Great Britain, apart from the regular occasions when it snows. 

In fact, if you are under 30 you may not even be aware that there was a time when we didn’t vex about Anthropogenic Global Warming. For young people, this ongoing threat has been the melancholy background music of their entire lives, to the extent that some experts have wondered if it partly explains the dramatic collapse in global fertility. If the whole world is going to boil, killing off all the birds and bees and turning Suffolk into the Sahara, what is the point in having babies, if you’re only adding to the problem?  

I am not immune to this concern. I have travelled the world for decades and I have witnessed the increasing volatility of weather patterns. I’ve seen spring arrive earlier, almost everywhere. I’ve also seen the vile way we pollute our precious planet. Even if the more extreme predictions of climatic disaster are overdone, we need to clean up our act. 

And yet, I can’t help also noticing that – even if climate change is a serious problem – we are strenuously and bizarrely ignoring a much greater threat, which is way more imminent, and which, if the worst outcome occurs (and many think it might), will make the burning eucalypt forests of New South Wales look like a trivial inconvenience.

I’m talking about P(doom). The fact you have probably never heard of P(doom) only underlines my thesis. Almost no one has heard of P(doom) – go on, try it on your friends, and see. So, what is it? 

Simply put, P(doom) is a percentage which expresses the probability of Artificial Intelligence causing something so bad for humanity it will feel like Doomsday, or actually be Doomsday. The precise severity of the P(doom) scenario varies from prediction to prediction. For some experts it means ‘merely’ that the robots will enslave us, for many others it means the total extinction of Homo sapiens.

In almost any form, this AI ‘doom’ will come much faster and be vastly worse than the most pessimistic prognosis of climate change. Climate change means we get more hurricanes, maybe London is flooded, millions of people move to Canada to escape the heat. The doom in P(doom) means we are exterminated, or we become like cattle to our robot owners, or the machines start a world-ending nuclear war, and so on. 

So how likely is an AI Doomsday? Helpfully, there are people who collate these estimates. I got these from the website PauseAI.com. Here are the P(dooms) of true experts in the AI field.

Yann LeCun
one of the three ‘godfathers of AI’, works at Meta
P(doom): less than 0.01 per cent

Vitalik Buterin
Ethereum founder
10 per cent

Geoff Hinton
one of the three godfathers of AI
(‘chance of human extinction in the next 30 years if AI is unregulated’)
10 per cent

Lina Khan
Head of FTC (the US government body monitoring AI safety)
15 per cent

Paul Christiano
AI alignment expert and AI advisor to the UK government
10-20 per cent, but up to 50 per cent if we get human-level artificial general intelligence (AGI)

Dario Amodei
CEO of Anthropic (a major AI company)
10-25 per cent

Yoshua Bengio
Another one of the three godfathers of AI
20 per cent

Elon Musk
CEO of Tesla, SpaceX, X
20-30 per cent

Emmett Shear
Co-founder of Twitch, briefly interim CEO of OpenAI (creators of GPT-4, ChatGPT and Sora)
5-50 per cent

AI Safety Researchers
(Mean from 44 AI safety researchers in 2021)
30 per cent

Scott Alexander
Popular Internet blogger at Astral Codex Ten
33 per cent

Eli Lifland
AI risk assessment expert
35 per cent

Holden Karnofsky
Executive Director of Open Philanthropy (a non-profit research and grant-making foundation)
50 per cent

Jan Leike
Alignment head at OpenAI
10-90 per cent

Zvi Mowshowitz
AI researcher
60 per cent

Daniel Kokotajlo
OpenAI researcher & forecaster
70 per cent

Dan Hendrycks
Head of the Center for AI Safety
>80 per cent

Eliezer Yudkowsky
Founder of the Machine Intelligence Research Institute
>99 per cent

As you can see, these are deeply scary numbers, apart from the first one, from Yann LeCun, the head of AI at Meta. LeCun thinks the chances of AI kicking off the apocalypse are less than those of an asteroid hitting the earth, and no one worries about that. Unfortunately, two weeks ago LeCun publicly scoffed at the idea that AI companies were close to making machines capable of ‘text-to-video’ (creating moving images from verbal prompts). Two days later, OpenAI launched Sora, which does exactly what LeCun said was impossible. So LeCun might not, these days, be the most reliable of AI forecasters.

That leaves us with all the other numbers, from around 10 per cent to over 99 per cent. However you spin it, these are terrifying percentages. We are creating technology which even its creators believe has a serious chance of annihilating humankind. Nor is this ‘doom’ some far distant calamity that will concern us centuries ahead. Most of these predictions are based on the attainment of Artificial General Intelligence, and the chaotic years that will likely follow that (possibly leading to Artificial Super Intelligence, as the robots learn to improve themselves). For a long while it was thought that AGI was decades away: the idea was we might reach it in the 2040s, or the 2080s, or never. Now the consensus is that it will be achieved within the next five to ten years, if it is not here already.

Why, then, are we not freaking out about P(doom), even as we witter on about wetter winters and the modest possibility of the Maldives submerging? One reason might be fatalism. There seems to be no way of stopping the march to AI: even if we persuaded every company in the West to cease all research now, we couldn’t stop the Chinese or the Russians continuing in secret (and this kind of research is easily hidden). And the temptation to achieve AGI will be too great to resist, because whoever gets to AGI first will wield enormous power. Or at least they will, for a few brief months – before the impatient computers take over and turn the world into a pile of paperclips, or a heap of irradiated ash, or a zoo where the robots can gawp at the hairless apes that foolishly birthed them.

However, I do not believe it is mainly fatalism that makes us so resistant to confronting the problem. I believe it is a mix of normalcy bias (the inability to anticipate and respond to extraordinary events) and basic ignorance. Most people have no idea how close we are to true AGI, and even more people have no idea how dangerous AGI might be. And so we trot along, fretting about the extinction of cute tree frogs in Bolivia, even as the boffins in Silicon Valley create machines which could evaporate us entirely, turning us into ghostly ancestors: vaguely remembered by the thinking machines.
