Jawad Iqbal

Elon Musk is wrong to call for a pause on the AI race

Elon Musk (Credit: Getty Images)

On 2 August 1939, Albert Einstein – at that time the most famous scientist in the world – put his name to a letter addressed to President Franklin D Roosevelt. In it he warned that the Nazis might be developing an atomic bomb and could bring about a chain of events that would lead to the end of civilisation.

Einstein urged the United States to begin work on its own atomic weapons to save humanity. His warning changed the course of history. The Manhattan Project was born and the United States won the race to make the A-bomb.

This week a group of tech sector luminaries, including Twitter owner Elon Musk and Apple co-founder Steve Wozniak, launched their own version of Einstein’s dramatic intervention. They issued a blood-curdling warning that the ‘out of control’ race to develop giant artificial intelligence (AI) digital technologies posed an unprecedented danger to the very survival of the human race. They, along with more than 1,000 AI experts and researchers, urged governments to act before it is too late.

The group, under the auspices of the somewhat immodestly titled think tank Future of Life Institute, warned of the huge potential for economic and political disruption. ‘Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,’ their open letter says. It is chilling stuff, even if none of those involved quite carry the moral authority of Einstein.

They called for a six-month ‘pause’ saying the technologies should be developed ‘only once we are confident that their effects will be positive and their risks will be manageable’. They were vague on who would make such a critical judgment call. If all else failed, they want governments to institute a moratorium on development work.

Which governments? How would any regulatory framework be policed? Could rogue nations like China be trusted to adhere to any new rules? They did not elaborate, which is no surprise given that the tech sector is notoriously resistant to regulatory oversight. ‘Such decisions must not be delegated to unelected tech leaders’, chorused the group of unelected tech leaders. Self-awareness is not a strong point in these would-be Cassandras. 

What exactly is behind this outburst of collective angst? Hysteria, nothing more, prompted by the appearance of ChatGPT, an AI chatbot, and the technologies that underlie it. This new tool – and it is really no more than that – has been anointed with potential superhuman capacity, rendering it far more intelligent than its creators, and prompting concerns about its impact in every human sphere from education to national security.

A new report from Goldman Sachs suggests millions of jobs could be lost to automation as a result of AI. Maybe, maybe not; who can really know? Computer chips were once thought likely to destroy jobs on a massive scale. The fear of machines taking our jobs goes as far back as the invention of the printing press.

The more mundane reality is that technology removes some jobs but creates others. Every advance throughout history has prompted a mixture of dread and wonder: the idea of autonomous computers, ‘thinking’ for themselves, and eventually usurping humans, is the stuff of nonsense and lurid science fiction. The current storm of moral panic over ChatGPT fits right into this. 

It is only the fevered imagination of the current crop of tech titans that allows them to believe that the world is on the cusp of a form of artificial intelligence that could prove a match for the genius and human intuition of someone like Einstein, threatening the future of the entire human race. His letter from history carries a useful lesson in this regard too.

Einstein later came to regret his intervention in the nuclear arms race, saying: ‘Had I known that the Germans would not succeed in producing an atomic bomb, I would never have lifted a finger’. Which just goes to show how difficult it is to predict the future. Even in Einstein’s case.

Moratoriums on scientific and technological research are never a good idea; others who do not have the same ethical or moral qualms will soon race to fill the gap. Even more absurd is the idea of giving unaccountable tech chiefs some moral licence to pronounce on good and evil, right and wrong, and what is best for the world. That poses a much bigger danger in the present than anything the new forms of AI might come up with in the future.

Written by
Jawad Iqbal

Jawad Iqbal is a broadcaster and former television news executive. He is a former Visiting Senior Fellow in the Institute of Global Affairs at the LSE.
