Social Media

Deplatforming Helps Social Media Catch Up To Hate Speech

After years of mismanagement, social media companies are retroactively trying to get rid of hate speech. It's difficult, but it can be effective.


The Poway and Christchurch shootings had something in common: both shooters spent notable time on the same message board, where each posted a manifesto before his attack. The shootings intensified calls to regulate hate speech in online spaces, but an open question remains: can policing hate speech be more than a game of whack-a-mole?

We've seen that deplatforming can control hate speech. At its most basic, deplatforming means censoring or outright removing people from the online spaces where they spread hate speech or misinformation. When it works, it can drive so much traffic away from those spaces that they empty out, cutting off revenue for their organizers.

When Alex Jones and Infowars were banned from Facebook and YouTube, video views and traffic to the Infowars website fell by half. Far-right activist Milo Yiannopoulos claims he's been banned from so many platforms that the bans have contributed to his millions of dollars of debt. Deplatforming even works for large communities: researchers at Georgia Tech found that when Reddit closed notorious hate-speech subreddits, the users who had posted there produced 80 percent less hate speech elsewhere on the site.

One reason hate speech became such a problem is that it had time to catch on. According to Jessica J. González, Vice President of Strategy and Senior Counsel at Free Press, the largest social media companies failed to properly enforce rules against hate speech in their early years.

"A lot of the companies have decent anti-hate policies, and what’s really lacking is the enforcement," González said. "I feel like enforcement is being tacked on at the end and not being thought through as thoroughly as it needs to be. Twitter at its infancy did not want to have any policy on hate. It has one now, it’s not doing a great job of enforcing it, but it has something."

González is working with the Southern Poverty Law Center and several other organizations to change this. Together they've created 'Change the Terms,' a set of model corporate policies to help internet companies curb hate online. She says one way deplatforming can be improved is by giving the public an "open book" on what companies take down, and why.

"People of color protesting racism are often taken down — while white supremacists engaging in racism are not. And we need some transparency so we can hold these corporations accountable on things like that. We need data so that it’s not just left to the companies or a few public watchdogs to decide."

The next step could be to take deplatforming even further: to the registrars, network providers and payment processors that keep websites running. Consider Gab, which pitches itself as a mostly censorship-free alternative to Twitter. After the Pittsburgh synagogue shooting, Gab's web host took the site down upon learning that the shooter had been an active Gab user. The site stayed offline until Gab found new providers willing to host its content.