After the recent wave of terrorist attacks affecting Europe, Facebook decided to step in and combat terrorist propaganda on social media. The tech giant now uses AI to detect and remove online terrorist content.
Facebook had never publicly revealed this behind-the-scenes work before, but the recent terrorist events pushed the company to take a public stance against the phenomenon. As a quick reminder, many politicians have accused social media platforms of giving terrorists the tools to develop and carry out their plans.
In an official blog post, Facebook made it clear once again that one of its main missions is to keep the community safe and prevent users from posting extremist content.
> Our stance is simple: There’s no place on Facebook for terrorism. We remove terrorists and posts that support terrorism whenever we become aware of them. When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny. And in the rare cases when we uncover evidence of imminent harm, we promptly inform authorities. Although academic research finds that the radicalization of members of groups like ISIS and Al Qaeda primarily occurs offline, we know that the internet does play a role — and we don’t want Facebook to be used for any terrorist activity whatsoever.
Here’s how Facebook uses AI to deter and remove terrorist propaganda
Facebook recently started using AI to stop the spread of terrorist content. Here are some of the technical solutions the company has deployed to detect and remove terrorist content before people in the community see it.
- Image matching
When someone tries to upload a terrorist photo or video, AI systems compare that content to known terrorism photos or videos. In other words, if Facebook previously removed a propaganda photo or video, the same image or video can’t be uploaded to the network again.
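At its core, this kind of matching checks an upload's fingerprint against a database of fingerprints from previously removed content. The sketch below uses an exact cryptographic hash for simplicity; real systems rely on perceptual hashes that also survive resizing and re-encoding, and the stored hash set here is purely illustrative.

```python
import hashlib

# Hypothetical database of fingerprints of previously removed propaganda
# images. In this toy example it contains the SHA-256 of the bytes b"test".
known_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Exact-match fingerprint. Production image matching uses perceptual
    hashing so that near-duplicates (re-encoded or cropped copies) also match."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_block_upload(image_bytes: bytes) -> bool:
    """Return True if this exact content was removed before."""
    return fingerprint(image_bytes) in known_hashes
```

An exact hash only catches byte-identical re-uploads, which is why perceptual hashing matters in practice: a single pixel change defeats SHA-256 but not a perceptual fingerprint.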
- Language understanding
Facebook uses AI to understand text that might be spreading terrorism ideas. This algorithm is in the early stages of learning how to detect similar posts. The good news is that machine learning algorithms work on a feedback loop and will get better over time.
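The feedback loop described above can be illustrated with a toy text scorer: the model scores posts, human reviewers confirm or reject the flag, and confirmed labels adjust the model. The term weights, update rule, and threshold below are all invented for illustration; production systems use learned neural text models, not hand-set keyword weights.

```python
# Toy illustration of a classify-review-update loop (a crude
# perceptron-style update). All weights here are made up.
weights = {"attack": 0.4, "join": 0.2, "recipe": -0.5}

def score(post: str) -> float:
    """Higher score = more likely to be flagged for human review."""
    return sum(weights.get(token, 0.0) for token in post.lower().split())

def record_feedback(post: str, is_violation: bool, lr: float = 0.1) -> None:
    """Nudge term weights toward the reviewer's label, so the scorer
    improves as more reviewed examples accumulate."""
    delta = lr if is_violation else -lr
    for token in set(post.lower().split()):
        weights[token] = weights.get(token, 0.0) + delta
```

The point of the sketch is the loop, not the model: each reviewed post makes the next score slightly more accurate, which is the sense in which such systems "get better over time."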
- Removing terrorist clusters
AI algorithms identify pages, groups, posts or profiles as supporting terrorism using various signals. For example, if an account is friends with accounts that have been disabled for terrorism, the AI flags the respective account as suspicious.
AI helps Facebook detect and remove new fake accounts created by repeat offenders. In this manner, the company dramatically reduces the time that recidivist terrorist accounts remain visible on Facebook.
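The friend-graph signal mentioned above can be sketched as a simple rule: flag an account when a large share of its connections were already disabled for terrorism. The function, data shapes, and 50% threshold below are assumptions for illustration; the real system combines many signals, not this one heuristic.

```python
def is_suspicious(account: str,
                  friends: dict[str, list[str]],
                  disabled_for_terrorism: set[str],
                  threshold: float = 0.5) -> bool:
    """Flag `account` if at least `threshold` of its friends were
    previously disabled for terrorism. Purely illustrative heuristic."""
    links = friends.get(account, [])
    if not links:
        return False
    hits = sum(1 for f in links if f in disabled_for_terrorism)
    return hits / len(links) >= threshold

# Example: account "a" has three friends, two already disabled.
friends = {"a": ["b", "c", "d"]}
disabled = {"b", "c"}
```

Graph-based signals like this are cheap to compute and hard for repeat offenders to evade, since rebuilding a social graph from scratch takes time.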
- Cross-platform collaboration
Facebook uses data collected via all its platforms, including WhatsApp and Instagram, to detect and remove terrorism content.
Let us not forget that nearly 2 billion people use Facebook every month, posting and commenting in dozens of languages. Keeping all these users safe is a huge challenge.
For this reason, Facebook values user feedback and relies on it to improve its services. Should you ever come across terrorist content on the social media network, report the respective posts and accounts to Facebook as soon as possible.