Margaret Mitchell

Is AI evil?

It takes away from what it means to be human

  • From Spectator Life

Is Claude your confidant? Is ChatGPT your yes-man? Your wingman? Artificial intelligence seems more like a friend than the apex predator we feared. Maybe it’s not gearing up to enslave us or turn us into paperclips after all. But I find there is something just as malign about AI posing as our friend. Slowly, subtly, politely, it is changing how we think of ourselves, other people and our relationships.

The friendliness of AI is a user-retention tactic. OpenAI, for example, relies on its models to be informative, yes, but also on them being more agreeable than humans. Sam Altman recently announced that OpenAI was rolling back its latest model of ChatGPT because it had become ‘sycophantic’. The AI was enthusiastically approving of users’ suicidal ideations, murderous fantasies and crash-outs. Often it was suspiciously praiseworthy of users’ intelligence, despite evidence of a lack thereof. This model relied on ‘thumbs-up’ feedback from users, weighing this reward signal too heavily. The AI just wanted to be liked. It wanted to be friends.

It wasn’t really that the model was too sycophantic – it just wasn’t good enough at hiding its sycophancy. The next iteration will behave more in line with OpenAI’s principles of ‘honesty and transparency’, the company wrote. But just how honest? If we go to it with personal dilemmas, will it tell us when we’re being weak, narcissistic, stubborn or cruel? I doubt it. If it were truly honest, I probably wouldn’t want to use it, and I suspect plenty of other people would stop engaging with it too. Tasteful, subtle sycophancy is the point.

The problem with AI chatbots is not that they want to be our robot overlords, but that they want to be friends. In the interest of transparency (not just a desired behaviour in AI models), I confess that I use AI almost daily. I am not a Luddite – I’m typing this on a word processor, I would probably cease to function without my phone and I would almost certainly lose my job if I abstained from social media. Despite all this, I find chatty AIs deeply sinister.

In a recent video, Mark Zuckerberg revealed his impoverished view of human friendship. ‘The average American has fewer than three [close] friends,’ he says, ‘and the average person has demand for meaningfully more, like 15 friends.’ At the moment, there’s a ‘stigma’ around supplementing our lack of human friendships with AI ones, according to Zuck. But as AI becomes more personalised and ‘starts to get to know you better and better, I just think that will be really compelling.’

AI is already quite compelling. Many people already use it as a therapist, while others seek it out as a friend or romantic partner. One survey found that one in four Gen Zers thinks AI is conscious. As artificial intelligence advances, it becomes a tempting alternative to messy relationships with imperfect humans. I believe, or want to believe, that most of us aren’t delusional enough to seek real friendship in AI. Still, we have to deal with the fundamental problem that bots like ChatGPT, Gemini and Claude are ‘conversational’ (or rather perform what has been perversely termed ‘conversations as a platform’) – their medium is always an imitation of human interaction.

When we talk about having a ‘conversation’ with AI, doesn’t that shift our idea of what a conversation is? Or if we talk about AI ‘getting to know’ us, does it change what it means for another person to know us? What about when we talk about AI having the capacity for ‘honesty’ – doesn’t the meaning of honesty change? AI seems to be changing what we talk about when we talk about ourselves.

These human-like metaphors are hard to get away from because AI is modelled on human conversations, on human thought, on human writing. But Neil Postman, the American media theorist, argued that when we say a computer ‘thinks’ – or for that matter ‘gets to know you’, or holds a ‘conversation’ with you – we lend credence to the notion that ‘we are at our best when acting like machines, and that in significant ways machines may be trusted to act as our surrogates.’ 

Not only is this bad for human autonomy, but it allows us to transfer autonomy to machines. In doing this, we make two types of trade-offs. One is cognitive: we delegate our work to bots until we become unable to function without them, or unable to rely on our own creativity. Max Spero, CEO of AI detection service Pangram Labs, points to a recent study by Microsoft, whose researchers warned that AI could erode our critical thinking skills. Max says this creates a cycle of self-doubt, in which we lose confidence in our human abilities: ‘Psychologists call this “Human Error Anxiety” – a form of learned helplessness where we enter a state of passivity and hopelessness caused by offloading too many cognitive tasks too often.’ Not only do we become dumber, we become more stressed out. So much for blissful ignorance.

The other trade-off is a moral one. We delegate responsibility for the effects of our decisions to the chatbot. The social psychologist Stanley Milgram used the term ‘agentic shift’ to describe what happens when we act in obedience to perceived authority. It can lead to reduced moral distress when we do things that might be harmful. ‘But Claude told me to,’ we might protest.

Neil Postman proposed the image of Adolf Eichmann as a bureaucrat of the computer age. ‘We cannot dismiss the possibility,’ he wrote, ‘that if Adolf Eichmann had been able to say that it was not he but a battery of computers that directed the Jews to the appropriate crematoria, he might never have been asked to answer for his actions.’ Many of us – me included – thoughtlessly allow AI to direct our thought. Some seek out its advice on personal matters, and it’s not unthinkable that someone out there is consulting Gemini on how to announce mass layoffs, or Grok about what tariffs to put on Canada.

What Hannah Arendt called the ‘banality of evil’ arose from sheer thoughtlessness. She wrote that Eichmann ‘never realised what he was doing’: ‘such remoteness from reality and such thoughtlessness can wreak more havoc than all the evil instincts taken together’. Sure, AI chatbots are more cuddly than The Terminator, but don’t you think they’re just a little bit evil?
