Some of the world’s largest social media companies and publishers are joining forces in an attempt to curb the spread of ‘fake news’ on the internet, with major players such as Facebook, Twitter, Google and the BBC all committing to the cause.
The move will see organizations share information more readily in an attempt to flag potentially untrue news stories, as well as collaborate on ways to display information, such as voting details, to avoid confusion.
While some partner groups already have internal solutions to tackle fake news, it’s hoped that this new project will produce a more consistent model that can be followed and easily understood by users. Here’s how the new partners will deal with deception.
What Will the Partnership Look Like?
Earlier in the year, the BBC organized what it dubbed a ‘Trusted News Summit’, inviting companies such as Facebook, Google and Twitter, as well as more traditional publishers including CBC, Reuters and The Wall Street Journal, to collaborate on fake news. While the project is still in its infancy, details have recently been shared about the commitments that attendees now plan to realize to reduce the amount of disinformation on their platforms. These include:
- Early Warning System: A system will be designed to easily share information between all involved parties when disinformation is discovered that ‘threatens human life or disrupts democracy during elections’. The focus is on moving quickly and intervening before the information can reach too many people.
- Media Education: An online platform will be launched to further education about online media and its accuracy.
- Voter Information: The groups will collaborate on how to handle the way that information is communicated around elections, to ensure a ‘common way to explain how and where to vote’.
- Shared Learning: Central to the ethos of the entire campaign is the sharing of information about what does, and doesn’t, work with regard to fake news.
There’s more to come, according to the BBC, with full commitments and plans to be published at a later date.
Disinformation and so-called fake news is a threat to us all. At its worst, it can present a serious threat to democracy and even to people’s lives. This summit has shown a determination to take collective action to fight this problem and we have agreed some crucial steps towards this. – Tony Hall, Director-General of the BBC and European Broadcasting Union President
The Current Problem with Fake News
The term ‘fake news’ rose to popularity after being used repeatedly by President Trump during and after the 2016 election to attack his critics. It has been a constant theme ever since. The originator of the phrase used it as recently as this week, in response to negative reports about his alleged weather map tampering.
Perhaps the biggest issue that publishers face when it comes to fake news is that whether something is declared true or not can be entirely subjective. For some, the willingness to believe a story that supports their own views can outweigh rational judgment. Another serious problem is that fake news can negatively affect our memories. A study into the phenomenon, released last month, found that half of those surveyed had formed false memories when presented with made-up events. Even after it was revealed that the stories weren’t true, many still refused to question their fabricated memories.
And there’s another issue with the spread of fake news that will be harder for content platforms to catch: in a 2018 study, researchers at Princeton University found that people received far more disinformation from the President himself than they did from fake news websites.
Previous Attempts to Stop Fake News
While companies such as Google and Facebook have been active in stopping the spread of disinformation in the past, no one solution has proved ideal, and responses haven’t always been immediate.
In 2017, Facebook introduced a new reporting tool that allowed users to flag posts as fake news, after which they were referred to a third party for independent validation. For stories found to be inaccurate, a warning would pop up when users attempted to share them, stating that independent fact checkers had disputed their accuracy. In 2018, Facebook stated it had improved the context around such stories by including information from the relevant Wikipedia entry on the subject.
Google has recently been playing its part in stemming the flow of fake news by not only demonetizing videos on its YouTube platform that promote an anti-vaccination agenda, but also including a warning message pulled from Wikipedia explaining that ‘vaccine hesitancy’ is one of the top ten global threats, according to the World Health Organization.
Similarly, YouTube has also cracked down on conspiracy videos on subjects such as flat Earth and aliens by deprioritizing them in its recommendation algorithm, making them less likely to be suggested to viewers.
While all attempts to quash fake news are worthwhile, it’s clear more needs to be done. No time frame has yet been released for when the ‘Trusted News Summit’ actions are to be implemented.