
Why White Nationalist, Supremacist Content Is So Easy To Find Online

Tech companies learned how to crack down on ISIS and Al-Qaeda. They haven't been so successful with white nationalist and white supremacist content.


Mark Zuckerberg: "We do not allow hate groups on Facebook overall."

Jack Dorsey: "Over the past few months, we've taken a lot of actions to remove accounts en masse."

Sundar Pichai: "We have policies against hate speech, and we clearly define them."

Mark Zuckerberg: "If there's a group that their primary purpose or large part of what they do is spreading hate, we will ban them from the platform overall."

Earlier this month, a gunman killed 50 Muslim worshipers inside two mosques in New Zealand — and amplified his violent ideology online. Within a day, Facebook says it removed more than 1.5 million videos of the attack, but 300,000 of those videos made it onto the platform before they were taken down.

A YouTube spokesperson told The Guardian, "The volume of related videos uploaded to YouTube in the 24 hours after the attack was unprecedented both in scale and speed — at times as fast as a new upload every second."

More than a week has passed since the attack, and both the gunman's video and manifesto are still spreading. In order to understand why, we need to talk about three big things: the proliferation of white nationalist and supremacist networks on the internet, the technology that exists to stop online extremism, and the normalization of Islamophobia.

"The infrastructure of white supremacist, white nationalist and other extremist ideologies online actually looks a lot like the infrastructure that you might be familiar with from just using social media in your normal life. These groups and individuals are going on to the same platforms we all use to try to bring followers into their movements, because they know they're already on there. And they know this is the place where you find a new audience. They're not interested in going into an echo chamber and trying to recruit people who already hold their belief systems. They're actually going to places to try to bring new members into these movements."

Keegan Hankes researches online far-right extremism for the Southern Poverty Law Center. Last year, he co-wrote a report on the violence of the so-called "alt-right" that touched on the self-radicalization of Dylann Roof, the man who killed nine black churchgoers in Charleston, South Carolina.

In an online manifesto, Roof said a simple Google search of "Black on White crime" immediately led him to the white supremacist propaganda that, in turn, led him to carry out the shooting. Before that, he had "no real-world connections to known hate groups."

It can't be overstated how accessible these platforms are, considering how easily neo-Nazi, white nationalist and supremacist propaganda can make its way to the top of mainstream feeds. On Facebook, anti-Muslim groups are thriving, with some reaching more than 40,000 followers. On YouTube, neo-Nazi videos are constantly being uploaded, recommended and then kept on the platform for months or even years at a time. Even before the 2016 election, George Washington University's Program on Extremism found that white nationalists and supremacists on Twitter were more active and influential than members of ISIS.

"If you want to take that first step at stopping the spread of these ideologies, you start with organized hate groups."

Companies like Facebook have the technology to do this — and they're already using it for other extremist groups.

Mark Zuckerberg: "We've developed a number of tools that have now made it so that 99 percent of the ISIS and Al-Qaeda content that we take down is identified by these systems and taken down before anyone in our system even flags it for us."

For years, Silicon Valley and law enforcement prioritized cracking down on ISIS and Al-Qaeda propaganda. Platforms like Facebook and YouTube invested in artificial intelligence to automatically scan and remove extremist content, to the point where the average user will find it almost impossible to come across online.

The same effort hasn't gone into white nationalist and supremacist content, and many experts say there's a reason for that.

"A lot of ISIS content is obvious about what it is. There are common stylistic elements in the propaganda, there are specific propagandists who might be featured, who want to identify. But I mean, comparatively, there are a lot of different white supremacist groups and even movements, so not all their content is going to be the same. And that, in many ways, that can be seen as kind of a very different kind of problem." 

Joshua Fisher-Birch, a content review specialist for the non-partisan Counter Extremism Project, says content from white nationalists and supremacists won't always explicitly fall under sites' definitions of hate speech or extremism, often because it's cloaked in irony and obscured through memes. To evade removal, well-known members of the movement argue for freedom of speech and accuse tech companies of "censoring" their content.

"I think part of this also has to do with the fact that these tech companies are businesses, and they want people to spend a lot of time on their sites, they want to appeal to a wide variety of users, and they want people to use their services."

That could explain why anti-Muslim propaganda from white nationalists and supremacists proliferates so much more easily than other forms of extremist content. Even when they violate a site's terms of service, Islamophobic posts are seen as more mainstream and normalized, and they can drive heavy user engagement and, ultimately, profits for tech companies. Online and off, the Center for American Progress says Islamophobia is fueled by a $57 million network of foundations, trusts and wealthy individual donors.

"It's a complicated ecosystem, when you're talking about spread of this propaganda. It definitely has an online component and an offline component. I think one of the biggest drivers of white supremacy in this current day is, of course, anxiety over demographic change. And this is something that you can see reflected even in the New Zealand perpetrator's manifesto.""Those anxieties that one might notice, one might get exposed to offline, are nurtured online. So we really see this feedback loop in the system that feeds each side when it comes to both online and offline materials."

Following the New Zealand mosque shootings, the House Judiciary Committee is reportedly planning a hearing on the rise of white nationalism in the U.S. The hearing could happen as soon as April, and it's expected to bring in officials from the Department of Homeland Security and the FBI.

"It's up to the public. It's up to advocacy groups and research groups, NGOs, and it's up to governments as well to put this pressure on tech companies to really get this content off of their platforms, and to really work against its spread and to be proactive."

Additional reporting by Reuters.