Facebook is testing a new feature that will allow users to report other users for extremist activity. It’s part of an effort to limit the hate speech and other harmful content that the platform has become associated with in recent years.
Facebook has received plenty of bad press over the past few years for its inaction on extremist content, with many commentators citing the platform as a core driver of the toxic political discourse and division in the USA.
Social media sites have long been able to point to Section 230, the law that shields websites from legal responsibility for their users’ posts. A 2018 amendment (FOSTA-SESTA) narrowed that immunity for certain categories of content, such as posts facilitating sex trafficking, and the shift has pushed companies to crack down harder on harmful material.
What Is Facebook Testing?
Facebook’s new anti-extremism system looks set to take two approaches. First, Facebook will notify you if it detects that you have been exposed to extremist material.
Facebook randomly sent me this notice about extremism when I clicked over to the app. Pretty weird… The Get Support button just goes to a short article asking people not to be hateful.
After enabling genocide, this feels like a bandaid on a festering open wound. pic.twitter.com/BsPFFZatx8
— Kit O’Connell 😷🌈 (@KitOConnell) July 1, 2021
Second, Facebook may ask its users to report people they believe fall under the extremist umbrella.
Facebook is now signposting help if you think a friend is becoming an extremist 😳
h/t @disclosetv pic.twitter.com/7L5B0UORzj
— Matt Navarra (@MattNavarra) July 1, 2021
When unveiling these new features, a Facebook spokesperson had this to say:
“This test is part of our larger work to assess ways to provide resources and support to people on Facebook who may have engaged with or were exposed to extremist content, or may know someone who is at risk.”
Detecting extremist content algorithmically is difficult, given the many forms extremist rhetoric can take, but relying on user reports has its own drawbacks: users with extremist friends are likely to share similar views in the first place.
What Could This Mean for the Future of Facebook?
While both of these detection systems are in their infancy, users already have misgivings about how they might be used. Because part of Facebook’s strategy relies on user reports, many predict the feature will be open to abuse, as what one person views as extremist may simply be another person’s opinion.
Putting aside the potential for misuse, many users agree that these systems don’t go far enough to address the core issues. Facebook’s poor enforcement efforts to date are frequently cited as a key reason so much misinformation and hateful discourse spreads across the internet, and these updates seem unlikely to turn the tide.
On paper, this is a step in the right direction for Facebook, but the accuracy of both user reports and automated detection, along with the follow-up action taken on them, will need to improve significantly before these tools can serve their intended purpose.