French Muslim Group to Sue Facebook and YouTube

The French Council of the Muslim Faith (CFCM) has announced that it is suing both Facebook and YouTube, arguing that the social media platforms have incited violence by hosting streaming video footage of the Christchurch massacre in New Zealand.

The social media services are taking down the footage as quickly as users continue to re-upload and share it. But, in creating the technology that allows such repellent acts to be broadcast globally, do Facebook and YouTube have a case to answer?

Why Is the CFCM Suing Facebook and YouTube?

The CFCM is a non-profit group created in 2003 to serve as a representative body for France's Muslim community in its dealings with the French state. The president of the CFCM's Islamophobia monitoring unit, Abdallah Zekri, said the CFCM had launched a formal legal complaint in France against the two social media companies.

Another advocacy group, the Federation of Islamic Associations of New Zealand (FIANZ), has voiced similar concerns, as reported by Reuters:

“They have failed big time, this was a person who was looking for an audience and … you were the platform he chose to advertise himself and his heinous crime,” FIANZ spokesman Anwar Ghani said in a statement directed at Facebook.

The anti-Muslim hate crime took place on March 15 at two mosques in Christchurch, New Zealand, where a shooter killed 50 people and livestreamed the killings to Facebook in a 17-minute video. This footage was then downloaded and reposted across the internet. As a major video-sharing site, YouTube was one of the main sources for re-uploaded versions of the footage.

Did Facebook and YouTube Do Enough?

Facebook has stated that it removed “1.5 million videos of the attack globally” within the first 24 hours, of which “over 1.2 million were blocked at upload.” However, critics reported being able to find copies on Facebook despite the company’s actions. On YouTube, copies reportedly remained up for at least eight hours after the event, despite YouTube’s efforts to remove them.

Notably, Facebook used “a combination of technology and people” to remove violating content. With audiences in the billions, Facebook and YouTube have a scale problem. They simply can’t use the safest solution, entirely human curators, and must instead rely on algorithms to an extent.
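To make the “technology and people” split concrete, here is a minimal, hypothetical sketch of an upload-triage step, in which an automated classifier score decides whether a video is blocked outright, queued for human reviewers, or published. The Upload structure, the thresholds, and the classifier score are all invented for illustration and do not describe either company’s actual systems.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    video_id: str
    violence_score: float  # 0.0-1.0 score from an automated classifier (assumed)

def triage(upload, block_at=0.95, review_at=0.60):
    """Route an upload based on its automated score (thresholds invented)."""
    if upload.violence_score >= block_at:
        return "block_at_upload"      # removed automatically before it goes live
    if upload.violence_score >= review_at:
        return "human_review_queue"   # ambiguous cases go to human moderators
    return "publish"                  # low-risk content is published

for u in [Upload("a1", 0.99), Upload("b2", 0.72), Upload("c3", 0.05)]:
    print(u.video_id, triage(u))
```

The scale problem lives in the middle band: the wider the “human review” range, the more moderators a platform needs to employ.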

In the hours following the attack, YouTube identified and removed copies that were being uploaded as quickly as one per second. Its AI detection tools were fooled by uploaders who added watermarks or animations, or resized the clips, changing the original footage just enough to slip through. “This incident has shown that, especially in the case of more viral videos like this one, there’s more work to be done,” YouTube’s chief product officer Neal Mohan said of the event.
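This cat-and-mouse dynamic comes down to how re-uploads are matched against known footage. The sketch below is purely illustrative, using a fake 8x8 frame and a toy average hash rather than YouTube’s actual matching technology, but it shows why exact fingerprints break the moment a clip is watermarked or re-encoded, and why platforms lean on perceptual matching that tolerates small changes (and can still be defeated by larger ones).

```python
import hashlib

def exact_fingerprint(frame):
    """Cryptographic hash: changes completely if any single pixel changes."""
    return hashlib.sha256(bytes(frame)).hexdigest()

def average_hash(frame):
    """Tiny perceptual-style hash: one bit per pixel, compared to the mean."""
    mean = sum(frame) / len(frame)
    return "".join("1" if p > mean else "0" for p in frame)

def hamming(a, b):
    """Number of differing bits between two hash strings."""
    return sum(x != y for x, y in zip(a, b))

original = [10, 200, 30, 180, 40, 220, 50, 190] * 8   # pretend 8x8 grayscale frame
watermarked = original.copy()
watermarked[0] += 5                                    # a tiny "watermark" tweak

# The exact hash no longer matches, so a naive blocklist would miss the re-upload...
print(exact_fingerprint(original) == exact_fingerprint(watermarked))   # False
# ...but the perceptual-style hash still matches, because the frame barely changed.
print(hamming(average_hash(original), average_hash(watermarked)))      # 0
```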

https://twitter.com/sheeraf/status/1106402124311613441

They weren’t the only social network facing moderation decisions: Reddit banned two popular subreddits, r/watchpeopledie and r/gore, on March 15 due to uploads of copies of the Christchurch video.

The Push for Better Moderation

A recent exposé on Facebook’s content moderators from The Verge reveals details suggesting Facebook may not be devoting enough resources to the issue: moderators earn just $28,800 per year and receive no support for the “PTSD-like symptoms” they develop after leaving the job. The report also describes insensitivity to the Muslim faith within the moderating teams, with two Muslim employees told they couldn’t pray during their nine minutes per day of “wellness time.”

In an open letter, the chairman of the House Homeland Security Committee criticized four major tech companies (Facebook, YouTube, Twitter, and Microsoft) for not doing enough to remove violent terrorist content in the wake of the shooting.

“You must do better,” Rep. Bennie G. Thompson wrote. “It is clear from the recent pattern of horrific mass violence and thwarted attempts at mass violence — here and abroad — that this is not merely an American issue but a global one. Your companies must prioritize responding to these toxic and violent ideologies with resources and attention. If you are unwilling to do so, Congress must consider policies to ensure that terrorist content is not distributed on your platforms — including by studying the examples being set by other countries.”

New Zealand’s reaction to the event seems to align with the CFCM’s argument that hosting the video incites violence. Not only did the country swiftly enact a “ban on all military-style, semi-automatic weapons,” but it also banned the 74-page manifesto from one of the shooting suspects, because it “promotes murder and terrorism.”

How Social Media Platforms Deal with Accountability

Social media giants have taken strides to address a variety of content moderation concerns over the past few years. YouTube, in particular, has tackled conspiracy theories by attempting to downrank misleading footage and, in February, by moving to demonetize videos advocating against vaccines.

Around the same time, YouTube faced another issue: reports of a “soft-core pedophilia ring” led it to block comments on videos prominently featuring children. YouTube also deleted a second channel used by already-banned conspiracy theorist Alex Jones, on which Jones had uploaded a video claiming the Christchurch shooting was a “false flag” attack.

Social media companies are also increasingly being called out by governments over their platforms’ effects on users: a recent UK government report noted a correlation between social media use and mental health and self-esteem issues. Facebook’s failures to tamp down disturbing and violent video have been critiqued in the past, as have YouTube’s.

Why do Facebook and YouTube consistently face moderation failures? The bottom line is that, at big tech companies, moderation of content comes second to a core mission to sell data and advertising. That needs to fundamentally change to solve the issue. If it does, it’ll happen slowly and only due to greater pressure from governments and users.

Thanks to their scale, social media platforms face unprecedented challenges in moderating online content. Public scrutiny, whether from US representatives or French Muslim advocacy groups, is still on the rise.
