Facebook Wants to Know If Your Friends Are Extremists

Facebook is toying with a new feature that would let users report their friends for perceived extremist activity

Facebook is testing a new feature that will allow users to report other users for extremist activity. It's part of an effort to curb the hate speech and other harmful content the platform has become associated with in recent years.

Facebook has received plenty of bad press over the past few years for its inaction on extremist content, with many critics citing the platform as a core driver of the toxic political discourse and division in the US.

While social media sites have long been able to hide behind Section 230, the law stating that websites are not legally responsible for the posts of their users, a 2018 amendment (the FOSTA-SESTA legislation) stripped that protection for content facilitating sex trafficking. Combined with mounting political pressure to reform the law further, this has led companies to crack down harder on harmful content.

What Is Facebook Testing?

Facebook's new anti-extremism system looks set to take a two-pronged approach. First, Facebook will notify you if it detects that you've been exposed to any kind of extremist material.

Second, Facebook may ask users to report people they believe fall under the extremist umbrella.

When unveiling these new features, a Facebook spokesperson had this to say:

“This test is part of our larger work to assess ways to provide resources and support to people on Facebook who may have engaged with or were exposed to extremist content, or may know someone who is at risk.”

Detecting extremist content algorithmically is difficult, given the many complex forms extremist rhetoric can take. But relying on user reports has its own downfalls: users with extremist friends are likely to share similar views in the first place.

What Could This Mean for the Future of Facebook?

While both of these detection systems are in their infancy, users already have misgivings about how they could be used. Since part of Facebook's strategy relies on user reports, many are predicting that the feature will be open to abuse, as what one person views as extremism may simply be another person's opinion.

Putting aside the potential for misuse, many critics argue that these systems don't go far enough to address the core issues at hand. Facebook's poor efforts to date are frequently cited as a key reason so much misinformation and hateful discourse spreads across the internet, and these updates seem unlikely to turn the tide.

While this is, on paper, a step in the right direction for Facebook, the company will need to significantly improve the accuracy of harmful-content detection, whether it comes from users or automated systems, and ensure that adequate follow-up action is taken, before this move can really serve its intended function.

