Independent Research Could Help Social Media Counter More Hate Speech

Most social media companies report on how effective their content moderation is, but experts think they could be more transparent.

The Steven Crowder YouTube harassment incident has highlighted a long-standing issue with YouTube and with other social media platforms: it's not always clear how they enforce their own rules and regulations. To their credit, social media giants are giving the public more insight into their fights against hate speech, but experts think there's still a missing piece that could help: robust independent research.

Social media transparency reports already show the scale of the problem. Facebook removed more than 2.2 billion fake accounts in Q1 of 2019 alone. Google, the parent company of YouTube, reported taking down roughly 2.8 million channels between January and March of 2019, mostly for spam or scams. Twitter's latest report says it took action on over 600,000 accounts in the second half of 2018 for rules violations.

According to Henry Fernandez, a senior fellow at the Center for American Progress and a member of the Change the Terms coalition, today's transparency reports not only help the public understand how enforcement works behind the scenes, but also show watchdogs exactly where their efforts could be most useful.

"In Google's transparency report for YouTube, we learned that well over 90% of content is flagged by automated systems, and not by individual users," Fernandez told Newsy. "Up until that point, for many of the civil and human rights organizations working on these issues, they were focused on how to build a more robust flagging operation, and how to make sure that flagging operation got the correct amount of attention. But it turns out that’s a relatively small amount of the actual flagging that occurs. While it remains important, now that we have access to that transparent data around the real significant role that automation plays in removing content, we can spend more time focused on where the game is actually being played."

Fernandez notes a big caveat with that data, however: it's all self-reported.

"So right now we have to rely on what companies tell us. But if there was robust transparency on data about hateful activities, entire university departments could be built just to focus on answering what works for keeping internet platforms free of hateful activities."

Academia is already studying the problem, and open access to platform data has produced results. In 2017, researchers from Georgia Tech scoured Reddit, scanning hundreds of millions of posts and comments to see how well the site's hate speech enforcement policy worked. At the time, Reddit's platform was unusually open: its code was open source and its public posts were accessible to outside researchers.

"From that, we’ve learned things like that when Reddit closed down two problematic, hateful Subreddits, the overwhelming majority of people either left Reddit entirely or stayed on reddit and no longer engaged in hateful activities or hateful speech when they went to other Subreddits. If we didn’t have that, we wouldn’t know something that is very useful in terms of how you can actually curb hate, not just on Reddit but on other platforms."