Where algorithms reign supreme, one might assume that the guardians of social media would have the good sense to protect their youngest and most vulnerable users. Unfortunately, as Digitalt Ansvar’s meticulous study reveals, Instagram has failed spectacularly at this task, turning the promise of AI-powered safety into little more than a public relations mirage.
Over four weeks, Digitalt Ansvar crafted a small, chilling experiment: ten fake Instagram profiles, each portraying a user teetering on the edge of despair. These profiles shared 85 posts laden with self-harm imagery—razor blades, blood, and messages that might drive an already fragile mind deeper into the abyss. Yet, in a display of negligence that would make even the most cynical observer gasp, Instagram’s moderators removed exactly none of this content. Zero. Not a warning, not a whisper of concern, not even the faintest algorithmic nudge to suggest professional help.
Algorithms That Don’t Protect, but Perpetuate Harm
It gets worse. Instagram’s recommendation system—ever eager to keep users scrolling—encouraged the 13-year-old profiles in this grim experiment to connect with every other member of the fake self-harm network. Follow one, and the algorithm gleefully recommends others. In essence, Instagram is not merely hosting such content; it’s actively building pipelines to amplify it.
This is not a new revelation, though the sheer scale of indifference still shocks. A Danish documentary back in 2020 laid bare how Instagram enabled dangerous self-harm communities to flourish, drawing young users into a world of spiraling harm. Meta, Instagram’s parent company, has long touted its AI tools as a panacea, claiming they swiftly remove harmful content. Digitalt Ansvar’s findings starkly undermine those assertions: independent tests showed that AI can identify such material with chilling accuracy, which makes Instagram’s failure to act look less like negligence and more like complicity.
Australia Says “No More”
This festering inaction has consequences. Last week, Australia became one of the first countries to institute a sweeping ban on social media for children under 16, with no exemption for parental consent. The argument? Social media platforms have become digital playgrounds without supervisors, where algorithms operate with cold, clinical efficiency, connecting the lonely and vulnerable in ways that sometimes lead to tragedy.
It’s a bold but controversial move. Critics argue it’s too paternalistic and that it hurts children from marginalized communities by restricting their access to an increasingly digital world; a better target, they add, would be reining in the platforms’ bad behavior directly. Yet, as Instagram’s failures come to light, others ask: what other option remains when corporations refuse to regulate themselves?
Playing Fast and Loose with the Rules
Let’s not forget the EU’s Digital Services Act, which Meta is treating with the reverence of a loose Post-it note. When your own guidelines and EU law say “no self-harm content,” yet your actions say “let the algorithm sort it out,” it’s safe to say the responsibility gap has become a canyon. The CEO of Digitalt Ansvar, Ask Hesby Holm, summed it up with the kind of frankness one wishes Instagram’s moderators possessed: “Safety isn’t high on Meta’s to-do list.” I’d add that they’d probably need an algorithm to find that list in the first place.
The Final Swipe
Technology, we’re told, is meant to make life better, safer, more connected. Yet when the guardians of the digital realm fail to act, leaving vulnerable users to the whims of code, society must intervene. Instagram may be great for filters and brunch photos, but when it comes to protecting the young and fragile, it’s time to admit that the algorithm isn’t working.
And if that’s not cause for concern, well, I have a bridge to sell you—and no, it’s not on Instagram.