Not all risks are equal — but which ones deserve our attention first?
Artificial intelligence introduces risks across a wide spectrum. Some are familiar and already here: biased decisions, misinformation, job displacement. Others are speculative and future-oriented: autonomous weapons, runaway systems, even the idea of superintelligence. The challenge is not just identifying risks, but deciding which ones to prioritise — and who gets to decide.
Everyday Harms
- Bias and discrimination: reinforcing patterns of inequality in hiring, lending, policing, and beyond.
- Misinformation: accelerating the spread of false or misleading content, often at scale.
- Labour disruption: replacing human work without adequate planning for retraining or support.
These harms are not hypothetical — they are already shaping people’s opportunities, rights, and livelihoods.
Medium-Range Risks
- Disinformation campaigns: coordinated use of AI to manipulate public opinion or elections.
- Weaponisation: autonomous drones, surveillance systems, and cyber tools deployed in conflict.
- Systemic dependence: critical infrastructure relying on AI in ways that may create brittle single points of failure.
These risks compound over time, as reliance deepens and systems scale.
Existential Threats
At the furthest end of the spectrum lie debates about artificial general intelligence (AGI) and superintelligence. Some fear a loss of human control or even extinction. Others dismiss these scenarios as distant or speculative. Either way, existential-risk debates command disproportionate attention relative to the everyday harms already unfolding.
Risk Distraction
The politics of risk are complex. Some companies amplify existential concerns, positioning themselves as essential guardians of humanity’s future. Critics argue this distracts from holding them accountable for the concrete harms happening now. In focusing on tomorrow’s apocalypse, we risk neglecting today’s injustices.
Why It Matters
Risk governance is not only about technical safeguards. It is about priority-setting — whose safety counts, whose future is protected, and whose present is overlooked. To govern AI responsibly, we must face risks across the spectrum, without letting distant threats eclipse the harms already shaping lives.
