Artificial Intelligence Is Now Used To Track Down Hate Speech

Social media companies are now using artificial intelligence to detect hate speech online.

Over the last decade, the U.S. has seen immense growth in frequent internet use: roughly one-third of Americans say they are online almost constantly, and nine in ten say they go online several times a week, according to a March 2021 Pew Research Center poll. That surge in activity has helped people stay more connected to one another, but it has also allowed hate speech to spread further and reach far more people. One fix that social media companies and other online networks have relied on is artificial intelligence, with varying degrees of success.

For companies with giant user bases, like Meta, artificial intelligence is a key, if not necessary, tool for detecting hate speech: there are simply too many users, and too many pieces of violative content, for the thousands of human content moderators the company already employs to review it all. AI can help shoulder that burden because it scales up or down with new influxes of users in a way a human workforce cannot.

Facebook, for instance, has seen massive growth, from 400 million users in the early 2010s to more than two billion by the end of the decade. Between January and March 2022, Meta took action on more than 15 million pieces of hate speech content on Facebook, and roughly 95% of it was detected proactively with the help of AI.

That combination of AI and human moderators can still let major misinformation themes fall through the cracks. Paul Barrett, deputy director of NYU's Stern Center for Business and Human Rights, found that every day about 3 million Facebook posts are flagged for review by the company's 15,000 content moderators, a ratio of one moderator for every 160,000 users, and roughly 200 flagged posts per moderator per day.

"If you have a volume of that nature, those humans, those people are going to have an enormous burden of making decisions on hundreds of discrete items each work day," Barrett said. 

Another issue: AI used to root out hate speech is trained primarily on text and still images. That means video content, especially live video, is much harder to flag automatically as possible hate speech.
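The article doesn't detail any platform's actual system, but the text side of the problem is easy to picture: each post is run through a trained classifier, and high-confidence hits are flagged. Here is a minimal sketch of that idea, assuming the open-source Hugging Face transformers library; the model name is hypothetical, standing in for whatever classifier a platform has actually trained.

```python
# Minimal sketch of text-only hate speech classification.
# Assumes the "transformers" package is installed;
# "example/hate-speech-classifier" is a hypothetical model name.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="example/hate-speech-classifier",  # hypothetical
)

posts = [
    "Had a great time at the game last night!",
    "An example of a potentially hateful post.",
]

for post in posts:
    result = classifier(post)[0]  # e.g. {"label": "hate", "score": 0.97}
    if result["label"] == "hate" and result["score"] >= 0.9:
        print(f"Flag for human review: {post!r}")
    else:
        print(f"No action: {post!r}")
```

Live video offers no such shortcut: frames and audio first have to be converted into something a classifier can score, and that has to happen fast enough to matter while the stream is still running.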

Zeve Sanderson is the founding executive director of NYU's Center for Social Media and Politics. 

"Live video is incredibly difficult to moderate because it's live you know, we've seen this unfortunately recently with some tragic shootings where, you know, people have used live video in order to spread, you know, sort of content related to that. And even though actually platforms have been relatively quick to respond to that, we've seen copies of those videos spread. So it's not just the original video, but also the ability to just sort of to record it and then share it in other forms. So, so live is extraordinarily challenging," Sanderson said.  

And many AI systems are not robust enough to detect hate speech in real time. Extremism researcher Linda Schiegl told Newsy that this has become a problem in online multiplayer games, where players can use voice chat to spread hateful ideologies or thoughts.

"It's really difficult for automatic detection to pick stuff up because if you're you're talking about weapons or you're talking about sort of how are we going to, I don't know, a take on this school or whatever it could be in the game. And so artificial intelligence or automatic detection is really difficult in gaming spaces. And so it would have to be something that is more sophisticated than that or done by hand, which is really difficult, I think, even for these companies," Schiegl said.