James W. Phillips was a special adviser to the prime minister for science and technology and a lead author on the Blair-Hague report on artificial intelligence. Eliezer Yudkowsky is head of research at the Machine Intelligence Research Institute. On SpectatorTV this week they talk about the existential threat of AI. This is an edited transcript of their discussion.
JAMES W. PHILLIPS: When we talk about things like superintelligence and the dangers from AI, much of it can seem very abstract and doesn’t sound very dangerous: a computer beating a human at Go, for example. When you talk about superintelligence what do you mean, exactly, and how does it differ from today’s AI?
ELIEZER YUDKOWSKY: Superintelligence is when you get to human level and then keep going – smarter, faster, better able to invent new science and new technologies, and able to outwit humans. It isn’t just what we think of as intellectual domains, but also things like predicting people, manipulating people, and social skills. Charisma is processed in the brain, not in the kidneys. It’s just the same way that humans are better than chimpanzees at practically everything.
JP: Do you think the trajectory from GPT4 to human-level intelligence and beyond could be quite fast?
EY: If you look at the gap between GPT3 [released in May 2020] and GPT4 [released in March 2023], it’s growing up faster than a human child would. Yann LeCun, the chief AI scientist at Meta, said that GPT3 failed when it was asked something along the lines of: ‘If I put a cup on a table and I push the table forwards, what happens to the cup?’ LeCun claimed that even GPT5000 wouldn’t be able to pass this test. Yet GPT4 passed it.
