Moderation Will Be the Most Controversial Social Media Issue of 2021

The debate over who's responsible for moderating social media and when this verges on censorship will only escalate.

The internet is a massive, near-limitless medium, and there’s no shortage of channels through which someone can voice a deliberately heinous or deeply misinformed idea. Many believe that social media platforms have an ethical responsibility to manage and moderate misinformation and hate speech spread by their users. This issue came to a head in the tumult of 2020’s political tribalism, racial justice movements and the COVID pandemic, and it’s not going anywhere in 2021.

Facebook is the first website that comes to mind when thinking about an online forum full of fake news and hatred. But it’s hardly alone, and platforms from Twitter to Wikipedia are all debating the correct remedies for the spread of malice and misinformation. Companies are taking drastically different approaches to regulating their content, spurring a debate about the line between moderation, deplatforming and all-out censorship of free speech.

The internet fosters toxicity like few other places on earth – so much so that governments are beginning to draft legislation to help control the spread of fake news and limit online hatred. When it comes to moderation, everyone has an opinion, but there’s precious little consensus on what on earth to do.

The Problem with Moderating Social Media

Many social media companies are viewed as lacking the ability or even the willingness to moderate content on their platforms. While most of them have a report function, this almost always addresses individual instances of hate speech or misinformation, and not the root problem of how such messages can spread.

There isn’t much money in self-moderation. By cutting off people who say something factually incorrect or harmful, a company only stands to lose customers, so the issue boils down to something almost impossible: asking a large company to put morality over profit.


It’s no surprise that social media has its issues. Anyone over the age of 13 can create a Twitter or Facebook account and present themselves and their lives in any way they like, as (un)truthfully and (un)controversially as they wish, for the world to see and respond to. Prolonged social media use has been linked to greater feelings of sadness and loneliness, and a recent rise in anti-science agendas has even slowed the COVID pandemic response, with the impact on loss of life likely to be analyzed and debated for years to come.

The problems aren’t limited to personal profile-led sites like Facebook and Twitter, either. Professional social network LinkedIn has a large problem with men trying to romantically solicit women, and online gaming platforms have been tackling aggression (from both foes and teammates) since the creation of the medium itself.

What is the Future of Internet Moderation?

From the get-go, the internet was always going to be a difficult beast to wrangle. When you give anyone the power to post anything, a lot of people are going to post whatever fits their preconceived agenda, whether it’s true or not. That’s especially true when they can hide behind a fake profile. And a lot of people are going to post hate speech under the guise of the grayest area of all – free speech.

While the government can make strides to prevent hate speech or fake news, the sheer volume at which both of these things can be produced is higher than almost anyone can imagine. It falls on the individual platforms to take responsibility for their products, and impose swift and harsh penalties for anyone knowingly sharing false information or hate speech.


However, that’s where the line gets hazy. How harsh should a penalty be if someone shared something false without realizing it? Who decides whether something is hate speech or just a divisive opinion? The platforms themselves will be the ones deciding, and if a decision loses them customers, they’re liable to regret it.


How Does Facebook Handle Moderation?

Facebook has become synonymous with a laissez-faire, arguably lazy approach to moderation. Given its enormous size, Facebook has an equally enormous content moderation problem: an absurd volume of fake news and hate posts can spread on its platform, and the problems extend to Messenger and WhatsApp, too. Facebook knows it has a problem, and an overhaul of its approach now looks close to unavoidable.

Discourse on Facebook has become so frequently toxic and violent that an estimated 70% of the most active civic groups can’t be recommended to other Facebook users.

Some of these groups were used to organize the Capitol Hill riots through Facebook, though Facebook COO Sheryl Sandberg said that she believed the riots were mostly orchestrated elsewhere.

“I think these events were largely organized on [other] platforms that don’t have our abilities to stop hate and don’t have our standards and don’t have our transparency.” – Sheryl Sandberg

Facebook claims that it’s cracking down on these groups. But in reality, its response has been lazy at best and malicious at worst, as the company knows that if some users perceive a low tolerance for hate speech as censorship, they’re likely to move elsewhere.


How Is Twitter Handling Moderation?

Ex-president Donald Trump has left behind quite a legacy, including elevating Twitter toxicity to unprecedented levels. The platform has always had its fair share of vitriolic conversation, but as toxic, politicized debate was spurred on by his near-constant tweets, the mood of the website took a turn, and debate over moderation became a hot topic.

Twitter has usually taken a similar approach to Facebook’s – tackling smaller cases of absolutely unacceptable racism or personal attacks, but doing little to address root causes. After four years of stirring the pot, Trump was finally banned, but only when he had days left of his presidency, and after Twitter had already milked huge levels of traffic from his prominence on the platform.


The result? An immediate debate over where moderation ends and censorship begins, with polarized choruses of backlash and support for Twitter’s move against Trump.

Twitter is getting experimental with the moderation and reporting of misinformation, even devolving some of that responsibility to a select group of approved users with its new Twitter Birdwatch initiative. It’s new ground, and though outsourcing moderation responsibility is certainly novel, it raises questions about how users are chosen for the initiative and how they themselves will be protected should they face accusations of censoring others.

Parler Runs Aground with Non-Moderation Policy

Parler was made and marketed as a free-speech alternative to mainstream “fake news” media. This attracted a wide range of people who wanted to post opinions, or actual fake news, and have them accepted as fact, without risk of what they perceived to be censorship.

The platform quickly devolved into a hotbed of racism, antisemitism, and dangerous anti-science conspiracy theories. And while Parler itself didn’t intervene in what its users were posting, the app stores that hosted it soon demanded some degree of moderation.

First, companies like Apple and Google said that either Parler would control its users, or they would delist the app from their stores. After Parler took no steps to moderate its content, the app was quickly taken down.

Then, after the riots at Capitol Hill, Parler was removed from the internet entirely. The company itself is still hanging around, but who knows for how long? Its CEO was just fired by the board, and it’s possible that this is just one more step toward Parler’s downfall.

How Does Wikipedia Handle Moderation?

Wikipedia, on the other hand, has far stricter rules around its content and its community. After all, this is a platform built on free and, more importantly, accurate information, so there is no room for fake news or biased hate speech.

One thing that Wikipedia has done to stand apart from the moderation approaches of other platforms is to be quite deliberate in its efforts to embrace wider representation among its moderators.

On top of its already militant regulation of content updates, Wikipedia has also recently introduced a new initiative that encourages a wider array of perspectives and representation, both across its team and its site.

“We need to be much more inclusive,” said María Sefidari, the chair of board of trustees for the non-profit Wikimedia Foundation. “We are missing a lot of voices, we’re missing women, we’re missing marginalized groups.”

While it’s a lot more difficult to manipulate Wikipedia and post lies there, there is also something to be said for its harsher and quicker treatment of misinformation when it is found, especially when compared to Facebook’s apathy.

What Can Social Media Users Do?

Everyday users can help combat hate speech and fake news online by reporting it, and by checking that the articles they share come from reputable sources.

For business users, social media marketing can be one of the most profitable channels for sales, brand growth, and connecting with your target market. But where there’s social media, there’s a risk of divisive or hostile comments, which can be unnerving on a professional account’s feed. Knowing that content moderation (or the lack of it) may affect or limit your small business’s posts can be a scary prospect. It can pay to use a social media management tool to organize your social media marketing campaigns and activity – these tools help businesses see new responses, respond to them smartly, or decide when they need flagging.

Rather than rushing out a campaign that contains false information, or drowning under a storm of messages after making a mistake or faux pas, you can draft messages and organize your viewers’ engagement to keep everything under control. Or as controlled as it can be, until giants like Facebook put their consciences above their capital.
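For readers curious about what the “flagging” these tools perform might look like at its very simplest, here is a minimal, purely illustrative sketch in Python. It is not based on any particular social media management tool; the blocklist terms and sample comments are invented for the example, and real moderation systems rely on far more sophisticated classifiers and human review.

# A minimal, hypothetical sketch of keyword-based comment flagging.
# Real moderation pipelines use machine-learning classifiers, context
# analysis, and human review queues; this only illustrates the basic idea.

BLOCKLIST = {"scam", "hoax", "miracle cure"}  # invented example terms

def flag_for_review(comments):
    """Return the comments containing any blocklisted term, for human review."""
    flagged = []
    for comment in comments:
        text = comment.lower()
        if any(term in text for term in BLOCKLIST):
            flagged.append(comment)
    return flagged

# Example usage with made-up comments
comments = [
    "Great post, thanks for sharing!",
    "This product is a scam, spread the word.",
]
print(flag_for_review(comments))  # prints only the second comment

Even a toy example like this shows why moderation is hard: a blocklist can’t tell the difference between someone spreading a hoax and someone debunking one, which is exactly the gray area the platforms above are wrestling with.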


Written by:
Duncan (BA in English Textual Studies and Game Design) is an Australian-born senior writer for Tech.Co. His articles focus on website builders, and business software that allows small businesses to improve their efficiency or reach, with an emphasis on digital marketing or accounting. He has written for Website Builder Expert and MarTech Series, and has been featured in Forbes. In his free time, Duncan loves to deconstruct video games, which means that his loved ones are keenly concerned about the amount of time he spends looking at screens.