Kenneth Payne

Can regulation stop artificial intelligence wiping out humanity?


The arrival of superintelligent artificial intelligence (AI) could wipe out humanity. That’s the fear of leading AI figures, who have today signed an open letter calling for AI safety to be a ‘global priority’. Geoffrey Hinton, a legendary figure among AI researchers, recently left Google and sounded the alarm. ‘It’s the first time in mankind’s history that we’re on the brink of developing something smarter than us,’ he told The Spectator. ‘We may be history’. Sixty-one per cent of Americans recently polled by Ipsos agree with him that AI is a threat to civilisation.

Sam Altman, OpenAI’s CEO and the chief architect of ChatGPT, is an enthusiastic advocate for more powerful AI and the positive changes it could bring. But appearing recently before Congress, he also articulated his ‘worst fears’ that the technology ‘causes significant harm to the world’. He called, too, for regulation, startling Senators unused to businesses advocating for tighter government control.

Altman met Rishi Sunak in Downing Street last week, presumably to repeat his message. Also attending was Demis Hassabis, co-founder of the remarkable DeepMind, the London-based AI company dedicated to ‘solving intelligence’. After the meeting, No. 10 acknowledged the ‘existential’ risk of AI for the first time.

The Centre for AI Safety statement released today reads:

‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’

They aren’t the only group to make the nuclear comparison. OpenAI recently published a memo on AI governance that draws the same parallel explicitly. We are, the company argued, likely to need something like the international nuclear regulator for superintelligence efforts. The International Atomic Energy Agency has, since the mid-1950s, coordinated intergovernmental efforts to promote the peaceful use of nuclear energy and guard against the proliferation of nuclear weapons, arguably with some success.

Is the nuclear weapons analogy the right one? On the surface the comparison seems intuitively plausible. Both are expensive, complex technologies with huge risks. On closer inspection, however, the analogy seems tenuous.

For one thing, unlike a nuclear explosion, there’s no settled idea of what superintelligence even is, let alone how it might emerge. Today’s AI is already superintelligent at some tasks, like protein folding, but remarkably dim at others. Authentic superintelligence would presumably have to surpass human intelligence across the board, not just in narrow domains.

How to get to superintelligence is, likewise, unclear. Will it be enough to continue along the current path staked out by ChatGPT – with a network of artificial ‘neurons’ trained on a vast corpus of data running on ever more powerful hardware? No one knows, not even OpenAI. 

OpenAI’s proposed nuclear-style regulation would, the company says, apply to systems ‘above a certain capability (or resources like compute) threshold’. That’s problematic on two counts. The first approach measures output, and it is hard to regulate output when you can’t accurately specify what you are looking for. The alternative, measuring input, most likely computing power, is little better. Building state-of-the-art AI today is computationally demanding and so hugely expensive. That’s one reason DeepMind joined forces with Google, which possessed immensely powerful computers and reams of data on which to train DeepMind’s models. But plenty of other activities need powerful computers too. And of course, today’s world-leading supercomputer is inevitably tomorrow’s laptop. Besides, it’s not self-evident that superintelligence will need all that much power. The secret sauce might not be a scaled-up GPT-like network, but some other technique. Again, if you can’t specify what it is you’re regulating, there’s little chance of regulating it.

Nuclear regulation was comparatively straightforward for another reason: governments were the only actors that mattered. Bomb-making required rare uranium and plutonium isotopes, necessitating a major industrial effort. States retained firm control of the means of production and, in the interests of national security, corralled the necessary expertise. Of course, they monopolised the output too.

The relationship between public and private actors is more complex this time. In the last two decades, a handful of tech giants have come to dominate AI research, drawing expertise away from the universities and government, and dwarfing the expenditure of all others, even the Pentagon. It’s certainly possible to regulate private actors like these. Meta has just been fined €1.2 billion by the EU for mishandling users’ data. But there are challenges. These global companies are so large that any one government is small beer – even a state as large as the UK, currently wrestling with Meta over backdoor access to its users’ encrypted WhatsApp messages.

Even supposing you could regulate these modern titans, there’s another problem: you might be looking in the wrong place. A recent memo apparently authored by Google insiders argued that power might be moving away from the big players. What if the giants, Google, Microsoft, or Meta, get part way towards superintelligence via a large, proprietary model, but this then leaks to third parties who finish the job?

Even supposing all these challenges could be addressed, there’s a final, likely insurmountable difficulty: the security dilemma. Fear of an adversary’s capabilities provides a powerful incentive to acquire your own. When it came to atomic energy, the tightly policed arms control regime amounted to winners’ justice. Having developed the bomb, the US (and a few other states) pulled up the drawbridge, via the establishment first of the IAEA and then the Nuclear Non-Proliferation Treaty. Only a handful of states thereafter came to possess nuclear weapons, and some that sought breakout capabilities have been thwarted by rivals.

That sort of tight control will be impossible this time. AI research, like nuclear expertise, is unevenly distributed between states, but the international barriers to entry this time are far smaller; computer power isn’t as scarce and trackable as uranium ore. If superintelligence is as transformative as OpenAI and others suggest, there’s a powerful incentive to cheat and plenty of opportunity to do so. You mightn’t even need the most brilliant computer scientists – just wait for a promising lead to emerge from a competitor, and then seed your research effort with it. Meanwhile, in contrast to the poorly camouflaged silos of the Cuban missile crisis, you could smuggle your finished superintelligence around on a hard drive. Proliferation looks comparatively easy. 

Superintelligence still seems incredible. The fashionable excitement of recent weeks may soon fade as ChatGPT is absorbed into the everyday, like other technological innovations before it. There are many more immediately pressing concerns for government and society alike. The problem, though, is real: a small, if growing, risk to humanity, with potentially devastating consequences. OpenAI’s regulatory suggestions are inadequate, but the real service Altman and co. have done is to sound the alert. Fresh ideas are needed, and quickly.

Kenneth Payne is a Professor of Strategy in the Defence Studies Department at King’s College London and the author of I, Warbot: The Dawn of Artificially Intelligent Conflict.
