Brendan McCord

How can we develop AI that helps, rather than harms, people?

In every technological revolution, we face a choice: build for freedom or watch as others build for control. With AI, the stakes couldn’t be higher. It already mediates 20 per cent of our waking hours through smartphones, automated systems, and digital interfaces. Soon it will touch nearly every aspect of human existence. While AI promises to liberate us for higher pursuits by “extending the number of important operations which we can perform without thinking,” history – from the iron cage of Soviet bureaucracy to modern Chinese surveillance – serves as a stark warning that automation can just as easily erode our freedoms and condition us to accept social control passively.

Today’s debate about AI’s future is dominated by competing visions of control. Doomsayers, like some of those at this week’s AI Action Summit in France, advocate for strict controls (even “pauses” on all development) that would forfeit progress while inviting tyranny. Accelerationists push toward AI supremacy without consideration for human flourishing. Regulators respond to every challenge with rules that stifle innovation and liberty, as seen in the EU’s AI Act with its complex compliance requirements. Meanwhile, techno-authoritarians like the Chinese Communist Party demonstrate AI’s potential for automated authoritarianism.

Even in democracies, we risk sleep-walking into centralisation. A handful of winners could consolidate AI development through regulatory capture. Whether driven by safety concerns or equality goals, these paths lead toward concentrated power, transnational oversight, and a decline in dynamic experimentation.

On the individual level, AI threatens to become an “autocomplete for life,” offering pre-packaged responses that slowly transform us into passive and dependent sheep: mere non-player characters ripe for exploitation via algorithmic “soft despotism.”

Fundamentally, we think that the goods of human life – friendship, family, wisdom, creative endeavour – are best pursued as the aims of self-motivated striving. To activate and realise our potential, we need to volitionally explore the world: to encounter it, experience it, enjoy it and get hurt by it. We need to discover the ideas, patterns, and habits that allow us to flourish, and to have the space to experiment with, discuss, and transmit what we learn.

The Western tradition isn’t a monolith to be automated from above. Its vitality flows from countless “experiments in living” informed by competing archetypes – the valour of the hero-soldier, the sturdiness of the farmer-gentleman, the devotion of the pious believer, the creativity of the maker-builder, and the insight of the contemplative philosopher. These ways of life don’t harmonise neatly. They’re not meant to.

In a new paper, published today for the Alliance for Responsible Citizenship, I make a simple argument: To preserve human freedom in the age of AI, we must resist its gravitational pull toward centralisation.

We need bold new approaches: innovative research, decentralised prototypes, and legal frameworks that reinvigorate the ideals of dynamic, decentralised liberalism. Individually, this means putting the active use of human freedom front and centre. It also means designing, governing, and using AI in ways that amplify, not attenuate, the creative powers of a free civilisation.

The Soviet system constrained human potential for 70 years; AI built or leveraged for control could constrain it for centuries. The choice is ours.

Brendan McCord is founder and Chair of the Cosmos Institute, an academy developing philosopher-builders to create AI that benefits people
