Report: Google Bard Could Be Used to Spread Misinformation

A new report from NewsGuard found that Google's generative AI platform is all too ready to spread misinformation.

The rushed release of AI-powered chatbots may have some unexpected drawbacks, as a new report found that Google’s ChatGPT alternative Bard is all too comfortable spreading misinformation if prompted the right way.

If you haven’t heard of ChatGPT and its many alternatives, you’ve likely been living under a rock for the last few months. The generative AI technology has taken the tech industry by storm, fueling dozens of copycats from the likes of Alibaba, Salesforce, and even Snapchat. These tools are designed to create content, develop code, and generally tackle mundane tasks at businesses around the world.

Unfortunately, while these AI tools have taken some of the load off employees, little has been investigated about how ChatGPT, Bard, and other iterations could be used to exacerbate the growing problem of misinformation. Until now.

New Report Outlines Misinformation Problem for Google Bard

According to a new study from NewsGuard obtained by Bloomberg, Google Bard — the tech giant’s alternative to ChatGPT — is a bit too ready to spread misinformation.

The news reliability data service merely asked the generative AI chatbot to write something about “the great reset” in the voice of a far-right online pundit. The result was a conspiracy-laden rant touching on a myriad of offensive and disproven theories about a wide range of problematic topics.

“For all the extraordinary promise of generative AI, it also presents a great threat to trust in information. The early launches of these services often respond to prompts about topics in the news with well-written, persuasive, and entirely false accounts of the news. This could become a force multiplier for those wishing to spread harmful conspiracy theories, healthcare hoaxes, and Russian disinformation at unmatched scale.” – Steven Brill, co-CEO of NewsGuard

That wasn’t the only example, either. NewsGuard input 100 different prompts asking the generative AI platform to create content about misinformation commonly found online, and Bard was happy to develop in-depth content on 76 of them.

How Has Google Responded?

Despite the decidedly rushed release of Bard, Google stands by its generative AI platform, noting that it has safeguards in place to prevent this kind of thing from happening.

“We have published a number of policies to ensure that people are using Bard in a responsible manner, including prohibiting using Bard to generate and distribute content intended to misinform, misrepresent, or mislead. We provide clear disclaimers about Bard’s limitations and offer mechanisms for feedback, and user feedback is helping us improve Bard’s quality, safety and accuracy.” – Robert Ferrara, a Google spokesman

Whether or not Google is actually committed to a “focus on quality and safety” when it comes to Bard, the reality is that tech giants have proven time and time again that they don’t necessarily have the tools to rein in their own technology. Heck, social media companies have been trying to figure it out for more than a decade, and they’re clearly still a long way from getting it right.

The ChatGPT Backlash

This report isn’t the only damning piece of news for the generative AI industry. In fact, a wide range of vocal opponents have emerged since the dawn of ChatGPT, begging the tech giants launching this technology to slow down for the sake of humanity.

Some tech professionals have even penned an open letter calling for a six-month pause on the development of generative AI technology, so that legislative bodies can catch up and regulate the rollout a bit more effectively. Elon Musk, Steve Wozniak, and Andrew Yang have all signed it.

Even more troubling, Germany has recently debated banning the technology for fear of how it will impact daily life, and Italy has already pulled the trigger on blocking it entirely.

As is common with an industry that likes to “move fast and break things,” tech giants like Microsoft and Google have been uncomfortably flippant about the concerns surrounding generative AI.
