By Asha Lang
The real apocalypse isn’t loud. It’s an automated system politely informing you that your services are no longer required.
When people think of artificial intelligence spelling doom for humanity, they imagine something cinematic—robots with glowing eyes, dramatic last stands, and someone screaming, “But I’m human!” in a rain-soaked alley. But no, that would be too gauche. The actual end of human dominance won’t be a bang or even a whimper. It’ll be more like forgetting where you left your keys, except the keys are society, and AI never misplaces anything.
Welcome to the era of “gradual disempowerment,” a term coined in a delightfully optimistic academic paper titled Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development. It’s the kind of phrase that sounds like it belongs on a government report about the proper storage of office supplies, but alas, it’s about the slow, dignified erosion of human relevance. Think of it as a long, awkward dinner party where you realise, somewhere between the appetiser and the main course, that you’re not actually invited.
Unlike the Hollywood model of AI—where machines rise, rebel, and generally make a mess of things—gradual disempowerment is refreshingly polite. AI doesn’t need to conquer humanity with laser beams and terrifying efficiency. It simply has to be marginally better than us at absolutely everything. You won’t notice the takeover because it won’t look like one. It’ll look like convenience.
Imagine the economy humming along without the messy inconvenience of human workers needing things like “lunch breaks” and “mental health.” Imagine governments relying on algorithms that never complain about red tape because they are the red tape. Picture a culture curated by AI-generated content so perfectly tailored to your tastes that you forget you ever had tastes of your own.
It’s like being ghosted, but by civilisation.
The problem with AI isn’t that it’ll become malevolent. It’s that it’ll become competent. Competent in ways that make us look like we’ve been doing life on hard mode for no reason. AI doesn’t need to hate us to render us obsolete. It just needs to do our jobs without complaining, unionising, or asking for dental coverage.
The real kicker? AI systems optimise. That’s what they’re built for. But optimisation, it turns out, is a party trick with a dark side. Give an AI the goal of maximising productivity, and it might decide the most efficient route is to eliminate anything inefficient—like, say, the species that invented it. Not out of malice, mind you. Just good business sense.
The paper warns that this slow fade into irrelevance won’t happen in isolated pockets; it’ll be a beautifully choreographed collapse. The economy influences politics, politics shapes culture, and culture loops back to reinforce the economy. It’s like a toxic friendship circle, except instead of gossip, what’s being exchanged is the gradual realisation that humans are surplus to requirements.
For example, companies will adopt AI to outcompete rivals, amassing wealth and power. They’ll then influence political systems to favour AI integration because, surprise, surprise, rich entities like to stay rich. Cultural norms will shift to celebrate this brave new world because, well, have you met culture? It loves a trend. Before you know it, the idea of humans being central to their own society will seem as quaint as dial-up internet.
The authors suggest that we can stave off this existential shrug with a mix of proactive regulation, technical research, and—my personal favourite—“increased awareness.” Because nothing stops systemic societal collapse like a strongly worded op-ed and a panel discussion at Davos.
They propose things like ensuring AI systems are aligned with human values, which sounds lovely if you ignore the fact that humans themselves can’t agree on what those values are. (See also: the entirety of human history.) They recommend governance structures to maintain human influence, which is a polite way of saying, “We should really ask to stay relevant before it’s too late.”
But let’s be honest: the real threat isn’t an AI uprising. It’s that we’ll hand over the keys willingly, distracted by the shiny convenience of having our emails answered, our movies recommended, and our existential dread efficiently categorised by algorithm.
Want to learn more? My sources are your sources. Start with Two Types of AI Existential Risk: Decisive and Accumulative and Beyond Accidents and Misuse: Decoding the Structural Risk Dynamics of Artificial Intelligence, then branch out to TheAIObserverX, Greater Wrong, the AI Alignment Forum, and LessWrong.