Facebook recently revealed how it’s combating the spread of what it calls ‘false news’ on its service, using a team of fact checkers to verify stories. That’s false news, not fake news – a key distinction in how Facebook is branding the exercise. With fact-checking and penalties for offending articles, it’s great to see the social media giant taking some level of responsibility. But does it go far enough?
In the first of a series of posts, Facebook presents three stories that users had posted on its site. It explains the fact-checking process and reveals which were true and which weren’t.
But is this process the right one, and what are the ramifications for the purveyors of false (or fake) news?
What is ‘False’ News?
Facebook has been on the defensive in recent months, campaigning to get its users back on side. And it’s no surprise: public confidence in the company has dipped considerably in 2018, thanks to data breaches and concerning reports on how the company surfaces content. This is why there are apologetic posters on the subway, in newspapers and online. Facebook has even hired former UK Deputy Prime Minister Nick Clegg as its new Head of Global Affairs.
It’s all because Facebook wants users to trust the brand again. Rebuilding that trust is key to its advertising-led profits, too.
That’s why Facebook is being proactive in hunting down ‘false news’ on its site. But what exactly is it?
One thing’s for sure: false news certainly isn’t fake news. You might be forgiven for conflating the two, but Facebook appears to have gone out of its way to make the distinction. You won’t find the other F-word mentioned anywhere, probably due to its partisan connotations.
To define false news, in the first of its blog posts, Facebook gives us three examples:
- Did a Saudi man really spit in a receptionist’s face?
- Is this the man who stabbed presidential candidate Jair Bolsonaro?
- Will NASA really pay you $100,000 to stay in bed for days?
To find out which of these were true and which were false, you’ll need to visit the Facebook blog yourself, but it seems that the focus is mainly on the political, which shouldn’t surprise anyone who has spent more than five minutes scrolling through their News Feed.
How is Facebook Flagging False News?
Facebook’s approach to flagging false news is remarkably similar to that of the myth-busting website Snopes.
The long-established site takes popular internet stories of dubious pedigree, researches them, and presents the facts it finds. Facebook’s approach and presentation are practically identical.
Facebook uses third parties to vet stories, rather than doing the research itself. That’s probably for the best. However, there are still limitations, as the company itself states.
Although some false news is flagged with machine learning, there’s a limit to the number of hours that can be dedicated to research, and Facebook doesn’t have fact checkers in every country. Its site states that it currently covers 14 countries; at the time of writing, there is no UK coverage, for example. That’s somewhat surprising given the ongoing investigation into Cambridge Analytica’s alleged manipulation of Brexit adverts aimed at UK voters.
Although the blog only surfaced three stories in this instance, the implication is that the company has a large team working constantly to root out these articles.
You’ll find a well-polished video on the Facebook site with staff talking earnestly about trust and its ramifications.
What Happens to Flagged Posts?
When the fact-checking team flags a post as ‘false news’, the post is demoted and its distribution in the News Feed is reduced.
Facebook will also warn anyone who tries to share the story that it has been fact-checked, and will notify those who shared it before the verdict was given.
Facebook also states that it is harsh on repeat offenders. Those who constantly spam the site with false articles may have the distribution of their entire page or website reduced – not just the offending article. Facebook will also withdraw advertising revenue from those accounts.
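To make the moderation workflow described above concrete, here’s a minimal sketch in Python of how a demotion-and-penalty pipeline like this might be structured. Every name, field, and threshold below is a hypothetical illustration – Facebook has not published how its system is actually implemented.

```python
from dataclasses import dataclass

# Hypothetical threshold; Facebook hasn't published a real figure.
REPEAT_OFFENDER_STRIKES = 3

@dataclass
class Page:
    name: str
    false_news_strikes: int = 0
    distribution_multiplier: float = 1.0
    monetization_enabled: bool = True

@dataclass
class Post:
    page: Page
    url: str
    rating: str = "unrated"               # set by third-party fact checkers
    distribution_multiplier: float = 1.0  # scales reach in feed ranking
    warn_on_share: bool = False

def apply_fact_check(post: Post, rating: str) -> None:
    """Apply a third-party fact checker's verdict to a post."""
    post.rating = rating
    if rating != "false":
        return

    # Demote the story: it stays visible, but reaches far fewer feeds.
    post.distribution_multiplier *= 0.2  # illustrative figure only

    # Warn anyone sharing it (and notify earlier sharers) of the verdict.
    post.warn_on_share = True

    # Track repeat offenders at the page level.
    page = post.page
    page.false_news_strikes += 1
    if page.false_news_strikes >= REPEAT_OFFENDER_STRIKES:
        # Penalties extend beyond the single article.
        page.distribution_multiplier *= 0.5  # whole page demoted
        page.monetization_enabled = False    # advertising revenue withdrawn
```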
The Problems With Facebook’s Solution
Facebook’s solution is admirable, but it clearly has a huge job ahead of it, and the company admits as much in its own content.
While demoting false news articles can help reduce the spread of misinformation, the time it takes for a fact checker to be alerted to a story, research it and publish their findings means that in many cases the damage will already be done. The Facebook News Feed moves at a fast pace, and although false news is flagged retroactively, a story could have been shared thousands of times before a verdict is given.
There’s also the issue of who Facebook’s users will choose to believe. As mentioned earlier, Facebook is currently experiencing a massive lack of faith in its content, with one survey stating that 58% of users don’t trust the site. If a group claims that something is true, and Facebook claims it isn’t, who will the site’s users side with?
It’s unlikely that the average Facebook visitor will care much about the accredited third-party fact checkers the company is using, if they’re even aware of them at all.
Then there’s the political landscape. For better or worse, Facebook has become a battleground for voters on both sides, and many refuse to have their opinions swayed. The Facebook blog itself is a good indicator of this problem: on one article about its attempts to reduce the spread of false news, commenters on one side hope this means an end to pro-Democrat stories, while those on the other hope for an end to pro-Republican ones, with both accusing their adversaries of lying. It’s hard to imagine appeasing both these sections of Facebook users, regardless of how robust the research is.
Facebook needed to do something, that much was certain. However, cleaning house is no small task for the company, even if the problem is one of its own creation. The US government has told the company to stem the flow of false information on the site, threatening to step in if it can’t.