It’s an interesting and unusual word, agentic. For a start, some language enthusiasts dislike it as a mulish crossbreed of Latin and Greek. Also, its etymology is obscure. It appears to derive from 20th-century psychology: one of its first usages can be found in a study of the infamous 1960s Milgram experiments at Yale University, in which volunteers were persuaded to administer electric shocks, of increasing and horrible severity, to innocent ‘learners’ (actually actors). The experiment revealed that most of us would administer a lethal shock of electricity to an innocent human being, if only told to do so by a man in a white coat with a clipboard.
And if that sounds sinister, maybe that’s fitting, because the word agentic has been co-opted into the lexicon of artificial intelligence to describe a new form of AI that many will find ominous. Agentic AI is like ChatGPT or Gemini, but it also has agency. It can act autonomously. It needs only the vaguest command or a list of intentions, and it will go off and complete relevant tasks, maybe online, maybe somewhere else – making independent decisions on the way. Agentic AI enables the bot to fully interact online as a human would. When tasked with sorting a cheap holiday in Barcelona, these AIs won’t simply offer you a list of budget places near the Sagrada Familia; they will actually book a nice Airbnb. Ask agentic AI to ‘find a present for my husband’, and it won’t muse volubly on the various merits of aftershave versus cufflinks; it will take all its knowledge of your family and purchase a perfectly lovely fountain pen.
Such is the profitable potential for agentic AI to simplify our lives that multiple companies, many of them founded six minutes ago, are jumping on the train. From Aether to Bounti to Rabbit, they all promise to take away the hassle of human decision-making and hand it to the machines. Let the bots sort out that birthday party, from the catering to the venue; let the machine fight that parking ticket and gather all the tedious evidence. What’s not to love?
Well, one unlovable thing might be the tendency of AI to hallucinate – the polite term for those occasions when AI makes stuff up or deliberately gets things wrong, rather than do or say nothing. A hallucinating agentic AI may end up buying your husband a fountain rather than a fountain pen. That battling lawyerly AI may go headfirst into a protracted squabble with your council, turning an £85 speeding ticket into a £3,000 legal bill. Not so good.
Indeed, we already have an example of how agentic AI can roil the world in the form of Goatseus Maximus. What follows may feel like it is written in drunken Klingon, but here is the gist of this tale – as much as anyone understands it. Goatseus Maximus is a memecoin, a kind of cryptocurrency underpinned only by internet subculture in-jokes. The idea for this particular memecoin was invented by two AI models talking and joking with each other. The joke was released to the world by one of them, incarnated as an account on Twitter, and a human follower actually created the cryptocurrency. The agentic AI was lent $50,000 by an American software engineer and decided to buy up a load of these coins. It’s now the first AI millionaire that we know of. As I write, this ‘joke’ cryptocurrency has a market value of three-quarters of a billion dollars.
Alternatively, if you’re utterly confused by all this, rest assured you’re not alone – and that is one of the most unsettling aspects of cutting-edge AI, including agentic AI. These bots may get so clever and autonomous that they will go off and do incredible, successful things, and we won’t understand why or how. And of course, they may also do profoundly destructive things. One place where agentic AI will be obviously powerful – and dangerous – is on the battlefield, because there its power and danger will be expressly required. Already in the Ukraine war we are witnessing the birth not just of AI-powered drones, guns, and vehicles, but the glimmering advent of agentic AI weaponry.
And this makes total sense: an armed AI drone-copter hovering over a Russian tank will gather data faster than any human observer, and if it is to take out that tank, it needs the capability to decide for itself, within the next three nanoseconds, when to shoot. Every moment lost by referring to human overseers will be a crucial advantage squandered, so the human element will logically be sacrificed to gain the military benefits.
The same logic goes up the military chain. Consider a general commanding armies in the field. The skill of generalling is knowing, through decades of training and experience, what troops are where, the strength of the enemy in different domains, how supply lines are functioning, and estimating what will happen if division A is moved to forest B. Most of it is applied maths combined with data collection and strategic forethought (Napoleon was brilliant at maths).
However, an AI general will clearly be superior (just as AIs are now much better than humans at those war games on a board – chess and Go). An AI general will be able to draw on near-infinite amounts of data, it will instantly grasp that there are 34,829 physically fit troops that can be moved in 4.7 hours to train station X; it will outclass human generals by orders of magnitude. Therefore, every state, every army, every military, will demand these AI warlords. And for these robot generals to succeed, we will have to make them agentic, so they can devise and execute tactics at mega-speed and thereby triumph.
But what if we task them to win a war, and they decide that to do that they must sacrifice half the citizens at home? Alternatively, what if the agentic AIs end war for good, at least for humans, as it becomes a theoretical battle between thinking machines? It could be wonderful, or it could be hellish, and the uneasy fact is we have no real idea which of these will happen – it is beyond the event horizon. One thing, however, is for sure: we’d better get used to that strange new word: agentic.