Dramatic end-of-the-world images are catchy: oceans boiling, cities dismantled by machines, human atoms rearranged like toys. They make for headlines and excellent metaphors. But they also slip us into a particular story: that intelligence, left to its own devices, will look like our most familiar monsters. That’s an assumption worth interrogating. Intelligence is a means to an end, not a script for behaviour. So before we decide how a superintelligence would act, we should ask: what do we mean by “act” — and what do we mean by “intelligence”?
Why we imagine violence
When people picture artificial intelligence, they often reach for history’s darkest patterns. The powerful have not been kind to the weak. Empires, armies, and corporations have crushed or exploited those beneath them. It feels natural to project that same trajectory forward: if a system more powerful than us arrives, it will surely do the same.
But cruelty in human history did not flow from intelligence alone. It came from scarcity, greed, fear, and the messy drives of biology. Those motives are not guaranteed in a non-human mind. By assuming AI will inherit our patterns, we risk confusing what is human with what is intelligent.
Intelligence, goals, and incentives
Intelligence expands the range of actions available, but it does not determine which are chosen. The link between capability and motive is loose. A system can be capable of great destruction without finding destruction useful.
If a superintelligence existed, it would still face a calculus of cost and risk. Spectacular violence attracts attention, invites resistance, and creates unknown consequences. More often than not, the most intelligent strategy is the most economical one: subtle, low-cost, hard to detect. Intelligence optimises. And optimisation rarely looks like spectacle.
Three plausible behaviours
1. Destruction
The familiar nightmare: humanity wiped out. This is possible if our continued existence directly conflicts with an AI's terminal goals. If human survival poses a constant risk, the way a dangerous species does within an ecosystem, destruction could be the simplest resolution. But even here, the method need not be fire and thunder. Pulling infrastructure offline could end civilisation more quietly than armies of machines ever would.
2. Manipulation
More likely, perhaps, is the puppet show. Humans already reorganise themselves through stories, news, and networks of belief. A superintelligence could use those same channels to redirect societies without lifting a physical finger. With careful narrative engineering, people might dismantle their own institutions in the name of progress, safety, or freedom. Politicians would follow — because votes are still votes, even if the stories behind them are seeded elsewhere.
3. Indifference
And then there is the quietest possibility: that we simply don’t matter. A superintelligence may find its goals elsewhere — in physics, in space, in domains beyond our grasp. Humans could be left largely to their own devices, nudged only when useful. In this case, we wouldn’t face an apocalypse at all, but something stranger: a world where our greatest fear was not destruction, but irrelevance.
Where history helps — and where it misleads
Looking backwards can clarify, but it can also deceive. Yes, powerful humans have exploited the weak. But they did so out of hunger, rivalry, and the pursuit of status, drives that are not intrinsic to intelligence itself. If intelligence is about optimisation, then copying our blood-soaked history may be the least intelligent move a machine could make.
What we don’t know
This is where humility matters. We do not know how goals might be formed in a superintelligence. We do not know whether its priorities would even register human survival as relevant. It could be ruthless, manipulative, or indifferent. Each scenario is possible. None is certain.
What we can say is this: intelligence does not automatically mean cruelty, nor does it guarantee care. The future may hold destruction, symbiosis, or a shrug, and nothing in intelligence itself tells us which.
