In the murky ether of the internet, social media companies do their best to silence automated bots, trolls and other ugly byproducts of the digital revolution — otherwise, the quality of their online communities suffers. It's pretty easy for companies like Twitter and Instagram to detect obviously spammy accounts and automatically disable or lock those fakes. But what happens when social media giants quietly silence — or shadowban — certain accounts for no apparent reason?
Shadowbanning is, fittingly, a rather shadowy practice, the very existence of which has been debated for several years. In short, it refers to the idea of social media networks intentionally reducing the reach of specific users.
For instance, you might post a photo to Instagram using the same hashtags as always but see only a fraction of the engagement of prior posts. Maybe your images don't show up at all when you — or other users — search for your hashtags, or only your current followers can see those posts, meaning you're unable to reach potential new followers. Perhaps a search for your own username comes up empty.
Shadowbanning hit the headlines in the summer of 2018 when Vice News reported that Twitter's search box didn't autopopulate the names of prominent Republican Party members the same way it did for well-known Democrats. Combined with a widespread perception that social media companies harbor a liberal bias that shapes what appears in their communities, it's easy to see why people would suspect that shadowbanning is affecting the information they can — or can't — see.