On August 18, Google will roll out advertising bans and website demonetizations aimed at shutting down coronavirus misinformation.
Under Google's new guidelines, ads will be removed from any webpages found to be promoting information not backed by “authoritative scientific consensus,” while any websites that repeatedly violate this ban will be demonetized entirely.
It's another step from a major online platform towards combating conspiracy theories and misinformation. Is it enough?
What's getting banned?
Google's coronavirus offensive comes in three parts.
First, Google will entirely ban ads that themselves directly promote false information about the coronavirus pandemic. Second, the tech giant will remove ads that would otherwise appear on individual webpages, when those pages are found to be promoting unscientific approaches to COVID-19. And finally, when a website shows a pattern of false claims, all ads will be removed from it, even on pages that don't specifically promote false information themselves.
In a statement to The Verge, which covered the news, a Google spokesperson clarified the reasoning behind the decision.
“We are putting additional safeguards in place by expanding our harmful health claims policies for both publishers and advertisers to include dangerous content about a health crisis that contradicts scientific consensus,” read the statement.
It's not the first time Google's parent company Alphabet has taken a stance against coronavirus misinformation, although previous attempts have had mixed results, particularly on YouTube, the massively popular Alphabet-owned video platform.
Back in March, Google tried to ban all non-government ads mentioning the coronavirus, but quickly walked back the move after pushback from Democrats worried it would do more harm than good.
YouTube went through a similar dance in March, first issuing a sweeping demonetization push that affected any video mentioning the coronavirus, even informative, useful ones.
On March 11, YouTube rolled back the demonetization, with YouTube CEO Susan Wojcicki explaining that YouTube's sensitive events policy was aimed at stopping the short-term spread of misinformation in the wake of a major event, like the pandemic.
“It's becoming clear this issue is now an ongoing and important part of everyday conversation, and we want to make sure news organizations and creators can continue producing quality videos in a sustainable way,” Wojcicki explained.
YouTube has been taking different tacks to quash misinformation for years.
In August 2019, YouTube announced it would stop playing targeted ads on videos designed for child audiences. Later in the year, the video platform said that “borderline” content, its term for false or misleading videos, had dropped by 70% since it began its plan to stop promoting those videos (while not removing them from the platform) at the start of the year.
Social platforms are assuming more responsibility
Google's latest step toward stopping bad actors is just one more sign of a significant shift among the tech community's biggest social platforms.
The tech industry has long valued the quality of “openness” — of giving users freedom to do and say what they want, under the assumption that the good stuff that results will ultimately outweigh the bad. That core belief has been challenged in recent years, with some arguing that an open platform can often simply allow pre-existing prejudices to perpetuate themselves, while offering all-new conspiracies a potential path towards mainstream acceptance.
Following Alphabet's more-or-less successful attempts to tamp down the spread of misinformation on its platforms, there still isn't a clear path forward toward a less open but still fundamentally open internet. But if the latest coronavirus ad bans are any indication, Alphabet has decided it's worth taking the trouble to find one.