January 4, 2018
Whether you're a casual user, an unabashed addict, or one of the weirdos without any online presence at all, social media has affected your life in one way or another. It could've been by connecting you with an old friend through Facebook, alerting you to an upcoming event on Twitter, or even finding you a job on LinkedIn. Either way, social media's impact on today's world is undeniable, no matter how much of a skeptic you are. Unfortunately, that level of connectivity brings with it the similarly undeniable presence of hate speech. And the storied history of their tumultuous relationship has quite a few twists and turns.
From the relatively unhindered hate speech purveyors of the early years to the laws being put in place to ensure an amicable future for these insanely popular platforms, social media and hate speech have walked hand-in-hand through history like a middle school couple worried that letting go would ruin the very thing that made them interesting in the first place.
For years, hate speech went relatively unchecked on social media sites. Under the guise of free speech, harassers, stalkers, racists, sexists, xenophobes, and all-around bad people were allowed to type whatever they wanted, whenever they wanted, to whomever they wanted, with impunity. And they did. Quite often.
According to a study from the Pew Research Center, 41 percent of American adults have experienced some form of online harassment. In the context of the study, online harassment refers to offensive name-calling (experienced by 27 percent), purposeful efforts to embarrass them (22 percent), physical threats (10 percent), stalking (7 percent), harassment over a sustained period of time (7 percent), or sexual harassment (6 percent).
And that's just online harassment. Hate speech, defined here as online harassment that targets race, religion, ethnic origin, sexual orientation, disability, or gender, has a similarly strong presence online, helped along by the fact that the number of hate groups in the US has tripled since 2009. And with social media being an integral tool for these groups to spread their message, it's no wonder sites like Facebook and Twitter have become hotbeds for hate speech.
Currently, social media platforms have begun making serious efforts to combat hate speech. Unfortunately, most of them are still scrambling to come up with an effective means of dealing with the problem in a meaningful way. Some have been decidedly more successful than others, but no one has yet found the silver bullet that will end hate speech on social media once and for all. But we're getting there.
A little more than a year ago, Twitter announced a troll muting function for users who didn't want to be bothered by harassers. It was their first significant step toward eradicating hate speech from their infamously cruel platform. Granted, it came nearly ten years after the platform launched, but hey, better late than never. Unfortunately, that step didn't lead to a run, a jog, or even a walk in the right direction.
While CEO Jack Dorsey has iterated and reiterated the importance of getting rid of these hateful tweeters on numerous occasions, little has been done to impede the onslaught of insults, slurs, and generally offensive behavior. Despite countless “policy changes,” the platform continues to value free speech over the necessary condemnation of these trolls, leading to an understandable pushback from protesters.
When it comes to hate speech on their platform, Twitter is admittedly walking a tightrope between free speech and verifiable monsters. And while a social media platform is far from the Supreme Court, it does present an issue unlike any other. With celebrities, sports stars, and current presidents of the United States utilizing it for everything from daily updates to global threats, it can be hard to put in place a system that can effectively decipher what is considered hate speech and what isn't, particularly when you're trying to get an algorithm to do it for you.
Speaking of algorithms, Facebook's isn't working either. Facebook is easily the most popular social media platform in the world, and eliminating hate speech is paramount to its continued success. Unfortunately, the algorithm in place to take down offensive posts is far from perfect, often leaving incredibly hateful posts up thanks to ridiculous loopholes while banning users who post nothing more than playful quips.
For example, as one New York Times article pointed out, the phrase “Female sports reporters need to be hit in the head with hockey pucks” would not be flagged, despite the threat of violence and sexist message, because occupation (“reporter”) is not a protected group under Facebook's system. Similarly, as shown in the internal Facebook screenshot below, provided by ProPublica, black children do not fall into the protected group, as “children” does not qualify.
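The subset rule ProPublica described can be summed up as: an attack is shielded only when every word describing its target is a protected category, so a single non-protected modifier strips the protection entirely. Here's a minimal sketch of that logic, using hypothetical category names (Facebook's actual categories and internals aren't public):

```python
# Hypothetical protected categories, per the rule described in ProPublica's
# reporting; the real system's category list and code are not public.
PROTECTED = {"race", "gender", "religion", "ethnicity",
             "sexual orientation", "disability"}

def is_flagged(target_categories):
    """An attack is flagged only if EVERY category describing its target
    is protected; one non-protected modifier (occupation, age, ...)
    removes protection for the whole phrase."""
    return all(c in PROTECTED for c in target_categories)

# "white men" -> race + gender: all protected, so an attack is flagged
assert is_flagged({"race", "gender"})
# "female sports reporters" -> gender + occupation: occupation isn't protected
assert not is_flagged({"gender", "occupation"})
# "black children" -> race + age: "children" (age) isn't protected either
assert not is_flagged({"race", "age"})
```

This is exactly why the loophole feels backwards: narrowing an attack to a more specific group of real people makes it *less* likely to be removed, not more.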
The best example of this gaping loophole occurred less than a month ago, when comedian Marcia Belsky was banned from Facebook for 30 days for commenting “men are scum” on a friend's post about the abuse she had received in response to a post on how men could treat women better. In solidarity, a group of women comedians posted variations of the phrase and were similarly banned. While Facebook has stated that the ban was a mistake, comedians have come forward saying that they are still being banned, and that their comments on older posts are even being deleted.
The sheer volume of Facebook posts, comments, and replies is far too much for one company to handle without the help of an algorithm. The reality, though, is that entrusting such a sensitive role to a computer has obvious drawbacks, particularly when the role in question requires a decidedly human eye to get it right 100 percent of the time. Until Facebook gets this algorithm sorted out, hate speech will continue to flourish while everyday people take the brunt of it. And with the way things are going with lawmakers, the company doesn't have a lot of time to figure it out.
While self-policing has been the agreed-upon strategy for social media companies dealing with hate speech, some countries have grown weary of the slow and ineffective systems in place. Germany, for example, just put into effect a law that can fine social media companies like Facebook and Twitter to the tune of $60 million for failing to remove hateful content from their platforms in a timely manner (in some cases, within 24 hours).
Despite it being only a few short days into the new year, the law has already claimed its first victim: far-right politician Beatrix von Storch, who casually referred to Muslims as barbarians on Facebook and Twitter. While she was temporarily banned (as many of us have been for far less), she appears to have been reinstated, although that likely won't last long given the strict nature of the law.
The primary argument against this law is that it may impede press freedom, as social media companies will likely err on the side of caution and delete anything that even resembles hate speech. However, I think we can all agree that it might be time to bend this hate speech stick the other way to correct a wrong that has been prevalent in our society for more than a decade. And if it takes a couple of multimillion-dollar fines to do it, so be it.