Controlling AI is the great challenge of our age

The genie is only half out of the bottle, says Richard Susskind, but we should be in a state of high alert – and anyone who thinks otherwise is ‘plain daft’

Frances Gibb
A seminal moment in the advance of AI: the world chess champion Garry Kasparov is beaten by the IBM computer Deep Blue in 1997. (Getty Images)
22 March 2025

In 1997 the world chess champion Garry Kasparov was beaten by an IBM computer system called Deep Blue. It had defied all expectations, exploring some 300 million possible moves in one second. The most that skilled chess players can contemplate is about 110 moves at any given time.

It was a seminal moment in the advance of artificial intelligence – even if not fully understood, writes Richard Susskind in How to Think About AI. People did not wholly grasp the impact of the exponential power of computers, nor that new ways would be found to develop systems that could achieve human expert-level performance.

Fast forward to 2016 and to AlphaGo, a machine designed to play the complex game Go, which has more possible board positions than there are atoms in the observable universe. That year, AlphaGo beat Lee Sedol, a world-class Go player, by four games to one.

Further advances followed, until the arrival of ChatGPT in 2022 – what Susskind describes as ‘the most remarkable breakthrough in my 40 years of working on AI’. This was also a milestone in public recognition of the potential of AI systems. ChatGPT – a chatbot that mimics human conversation – can answer almost any question in ordinary language. Classified as ‘generative’ AI, it can produce content on demand. And not just text: similar systems generate art, music, video and even high-quality software code.

This brave new world generates amazement – but, with it, alarm. Susskind’s timely book comes as the country’s leading artists, actors, musicians and writers, backed by newspapers and other publishers, are running a campaign to highlight the threat posed by unregulated AI to their industries.

Is AI a force for good or bad – and can its development be controlled? Susskind, a lawyer as well as a tech guru, is well placed to balance benefits with threats, being neither an ardent AI apologist nor a neo-Luddite. His analysis, aimed at the lay reader, is that these are early days; that the most powerful digital technologies are yet to come – a prospect he finds at once exhilarating and deeply unsettling. Machines are becoming increasingly capable, and the technologies underpinning them are advancing at an exponential rate. Are we, he asks, building ourselves a monster?

In a well-researched mix of historical analysis and personal reflection, Susskind refrains from offering concrete answers to problems – where to draw the regulatory line, for instance, on the current copyright debate; or how to deal with the threat to jobs through AI driverless cars, which will have ‘colossal consequences’ for 10 per cent of the labour force. Rather, his main message is that governments, institutions and business must be vigilant about harnessing AI’s benefits and managing its risks – not just the ‘generative’ AI that exists now, but future super-intelligent machines which will match or exceed human performance. Those sanguine about the risks, he warns, have ‘not thought deeply enough’. To dismiss talk of existential risk or potential catastrophe is ‘either disingenuous or plain daft’. We should not just be uneasy or disorientated but in a ‘high state of alert’.

How will it pan out? Will machines replace us – and should we unite with them, coexist at a distance or shut them down? Susskind, thankfully, concludes that even if the threats to humanity from the weaponisation of AI, or through accidental destruction, are foreseeable or possible, they are unlikely. But doubters should not shrug off a credible if improbable threat.

Finally, in what feels almost like a segue into a different book, Susskind takes a philosophical plunge into bigger moral questions concerning the long-term relationship between humanity, super-intelligent machines and the cosmos itself – touching on his personal fears for the future of humanity and the planet. He envisages AI-enabled worlds where we may not even know whether systems are conscious, or whether they are troubled by moral dilemmas. Humans may spend more time in virtual reality. If so, who is the creator? God, or any theological considerations, are left aside in his hypotheses. But then perhaps AI itself is the new god.

Susskind believes the genie is only half out of the bottle: that computers are still within our control. Yet, for all his enthusiasm for AI’s benefits, he sets out a sobering vision of a dystopian future universe. We must ‘shout loudly’ for the continued future of humanity (who would not?) and act in the next decade through national and international law to shore up civilisation.

It is a wary endnote. But he sees preserving humanity, both with and from AI, as the great challenge of our age. Otherwise we risk E.M. Forster’s scenario in the story ‘The Machine Stops’, where ‘Man was dying, strangled in the garments he had woven’.