Connor Leahy

DeepSeek shows the stakes for humanity couldn’t be higher


What is DeepSeek, the Chinese AI system that’s shaken the world, and what does it reveal about our future?

While DeepSeek has been around since 2023, what shocked the world was the 20 January release of its DeepSeek-R1 AI model, a Large Language Model (LLM) that matches the intelligence of American giant OpenAI's latest model, o1, yet was far cheaper to create.

The increased efficiency comes from the artificial intelligence underlying R1. DeepSeek claims training cost a mere $6 million, while US companies like OpenAI and Anthropic have spent more than ten times as much to create comparably smart AIs. DeepSeek's success is due to many small engineering innovations: finding ways to get the same bang for much less buck, using new techniques that allow the AI to learn more efficiently from its training data.

As for R1's improved reasoning capabilities, these come from the same method underlying OpenAI's recently released o1 model: leveraging so-called 'reinforcement learning' to teach AIs to reason better about their answers. This method takes an existing AI and asks it to solve maths and computer science problems by reasoning through them. Since the answers to these questions can be checked by a program, it is possible to automatically tell the AI whether it succeeded, and to reward it for reasoning in ways that lead to correct answers.
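To make the idea concrete, here is a minimal sketch of such a verifiable-reward loop. This is not DeepSeek's actual training code (R1 updates the weights of a large neural network with far more sophisticated reinforcement learning algorithms); the toy 'policy', the two strategies, and the learning rate below are illustrative assumptions, chosen only to show how automatically checkable answers can act as a reward signal with no human grader involved.

```python
import random

# Toy verifiable-reward loop. The "policy" here is just a weighting over
# two candidate strategies for a simple arithmetic task; real systems
# reinforce reasoning behaviour across billions of model weights.

def correct_strategy(a, b):
    return a * b          # actually solves the multiplication problem

def flawed_strategy(a, b):
    return a + b          # a plausible-looking but wrong approach

strategies = [correct_strategy, flawed_strategy]
weights = [1.0, 1.0]      # the policy starts with no preference
lr = 0.1                  # illustrative learning rate

for step in range(200):
    a, b = random.randint(2, 9), random.randint(2, 9)
    # Sample a strategy in proportion to the current policy weights
    idx = random.choices(range(len(strategies)), weights=weights)[0]
    answer = strategies[idx](a, b)
    # The reward is verifiable: a program checks the answer automatically
    reward = 1.0 if answer == a * b else 0.0
    # Reinforce whichever strategy produced a verified-correct answer
    weights[idx] += lr * reward

print(f"final weights: correct={weights[0]:.1f}, flawed={weights[1]:.1f}")
```

Run for a few hundred steps, the weight on the strategy that yields verifiably correct answers dominates. In miniature, this is why the method scales so cheaply: the feedback comes from an automatic checker, not from expensive human labelling.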

This autonomous loop of improving its own reasoning leads R1 to spend more time thinking before committing to an answer, and to reach what the DeepSeek researchers call 'A-ha moments': moments when R1 realises that it has taken the wrong approach to a problem, and starts from scratch in a more fruitful direction.

This is exactly what OpenAI has been doing with its o1 and o3 models, and what other competitors (Meta, Anthropic) are assumed to be doing in private. This is not a deep scientific breakthrough, but rather another clever engineering trick that can be adapted quickly. When combined with greater computing resources, these tricks give you increasingly intelligent AIs.

Viewed in this light, DeepSeek's new AI is but the latest event in the current race to AGI (artificial general intelligence) – AI at least as smart as any human. It is worth remembering that these multi-billion dollar companies are not looking to just build better chatbots. Their ultimate goal is thinking machines better than any human at most tasks, which they call AGI or superintelligence; as OpenAI puts it, 'highly autonomous systems that outperform humans at most economically valuable work'. This is why AI engineers are now hellbent on producing general reasoners, capable of thinking through problems and acting in the real world autonomously, not mere chatbots to replace Google search. The CEOs of these corporations, from OpenAI and Anthropic to DeepSeek, are all saying that AGI is coming soon, in a matter of years.

DeepSeek's new AI also highlights a hard truth about racing to AGI: it doesn't rely on any deep scientific breakthrough or secret sauce. Instead, it's just a collection of engineering tricks that keep enabling more and more improvements to current AIs, with no wall in sight. The code underlying DeepSeek's training methods is particularly simple, making it easily replicable by other teams around the world.

Right now, AI research is completely unregulated. As DeepSeek demonstrated, without restrictions any innovation made in the US will find its way to other countries within a few months, including to rogue states and actors. So any talk of an AI 'lead' will be a fleeting illusion unless research on the most powerful AI systems is restricted. And without measures to stop AI proliferation, countries like North Korea will simply piggyback on innovation in the US or China, and massively scale up their cyber and physical attacks on other countries.


Even worse, AI proliferation is particularly worrying because of the catastrophic risks that advanced AI poses. If anyone develops AGI, it will then be used to autonomously run AI research, leading to AIs vastly more competent than the whole of humanity. Were this to happen, humans would no longer be in control of our own future, and would be subject to the whims of a greater intelligence.

This is why Nobel Prize winners, top AI scientists, and CEOs of leading AI companies warn that mitigating the risk of extinction from AI should be a global priority.

Without global measures in place, over time more and more AGI building efforts will spring up all around the world. And any AGI project going wrong will spell catastrophe for the whole of humanity: ‘lights out for all of us’, as OpenAI CEO Sam Altman famously admitted.

Rushing towards AGI is not the solution. No one on the planet knows how to contain and control autonomous AIs that are smarter than any human being: whoever reaches AGI first will soon lose control of their creation, spelling disaster for the whole of humanity. The only winner of an AI arms race is AGI itself: not the US, not China, and not any other government.

The only way forward is to ban dangerous AGI development globally: to broker an international deal to prohibit the development of superintelligence and put in place mutual monitoring, as we did for other high-risk technologies like chemical weapons and human cloning.

President Trump is well positioned to strike such a deal. However, it will all depend on which AI faction comes out on top in the new White House. Whether those concerned about AGI, such as Elon Musk or David Sacks, win out against the Big Tech supporters of an all-out race towards machines smarter than humans might determine not only the future of US tech, but that of all of humanity.
