Here's How Facebook Determines What Hate Speech Looks Like

Facebook's internal policies protect certain groups of people from hate speech, but not others. And subsets of protected groups are fair game.

Facebook's content rules are under scrutiny again. A recent ProPublica report gives greater insight into how Facebook decides whom it protects from hate speech on the platform.

The outlet obtained an internal presentation outlining Facebook's content rules. In short, the policy protects some groups of people from harassment while leaving posts that target other groups untouched.

According to its policy, Facebook can remove posts that attack groups based on race, sex, gender identity, sexual orientation, religion, national origin, ethnicity and serious disability or disease.

Subsets of protected groups are fair game for targeting whenever the subset is defined by a characteristic that is not itself protected. One slide illustrated the logic: a post attacking all white men should be taken down, because race and sex are both protected categories, while attacks on women drivers or black children are left up, because occupation and age are not.
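The rule amounts to an all-or-nothing check on the categories that define the targeted group. Here is a minimal sketch of that logic, written as hypothetical Python for illustration only; the function name, data shape, and category labels are assumptions based on the reporting, not anything from Facebook's actual systems:

```python
# Hypothetical illustration of the subset rule described in the leaked slides.
# This is not Facebook's code; the names and data shape are invented for the example.

PROTECTED_CATEGORIES = {
    "race",
    "sex",
    "gender identity",
    "sexual orientation",
    "religion",
    "national origin",
    "ethnicity",
    "serious disability or disease",
}


def attack_is_removable(target_attributes: dict) -> bool:
    """Return True when every attribute describing the targeted group is a
    protected category; a single non-protected attribute (an occupation, an
    age group) is enough to leave the post up under the rule as reported."""
    return all(category in PROTECTED_CATEGORIES for category in target_attributes)


# "White men": race and sex are both protected, so the attack is removed.
print(attack_is_removable({"race": "white", "sex": "male"}))            # True
# "Women drivers": occupation is not protected, so the post stays up.
print(attack_is_removable({"sex": "female", "occupation": "driver"}))   # False
# "Black children": age group is not protected, so the post stays up.
print(attack_is_removable({"race": "black", "age group": "child"}))     # False
```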

This is only the latest release of Facebook's internal content-moderation documents, which have also shown how the company handles legally tricky subjects such as Holocaust denial and online extremism.

Offensive content is increasingly putting Facebook and other social media giants at odds with governments around the world. A bill recently proposed in Germany could fine companies up to $53 million if they fail to remove offending content quickly enough.

The problem is only going to get more complex as Facebook's user base grows — the site recently passed 2 billion monthly users.