After a string of terror attacks over the past couple of years, some social media companies are increasing their counterterrorism efforts.
Facebook announced Thursday it will start using artificial intelligence to target and remove posts that support extremism and terrorism from its website.
The company said: "We want Facebook to be a hostile place for terrorists. The challenge for online communities is the same as it is for real world communities — to get better at spotting the early signals before it's too late."
Facebook's AI will automatically detect extremist propaganda, language and clusters of terrorist accounts. The company hopes the system will learn over time how to identify and shut down extremist images and phrases.
Facebook still plans to use humans to fight terror — the company pointed to its in-house counterterrorism specialists and growing content review teams as evidence.
Back in December, Facebook teamed up with Microsoft, Twitter and YouTube to create a database for terrorist propaganda and recruitment. This database allows all four companies to quickly identify and take down extremist content.
From the end of 2015 through 2016, Twitter also suspended over 600,000 accounts for promoting terrorism.
Despite these efforts, terrorist groups are quickly seeking out alternative platforms on which to recruit and communicate.
Experts say that jihadi terrorists are using Telegram to communicate because the app not only allows encrypted messaging but also includes secret chat rooms where extremist groups can spread their messages.
Soon after this came to light, Telegram created an "ISIS Watch" channel where users can report ISIS communications. Within four months of the channel's creation, around 8,000 ISIS bots and channels had been shut down.
Some experts worry Facebook's new policies won't do enough to actually fight terrorist ideology. There's also the question of whether Facebook should target white nationalist and neo-Nazi movements as well as groups like ISIS and Al-Qaeda.