Google Reportedly Overruled a Risk Assessment to Launch Bard AI

Senior Google leadership reportedly sent out a "code red" in December 2022 to accelerate AI programs.

The tech world’s AI gold rush might have been a little hasty: Google ignored employee warnings and even overruled an official risk assessment in order to launch its AI chatbot Bard, reports claim.

The alleged warnings surfaced in internal messages seen by thousands of employees in February 2023, while the risk assessment was overruled a month later in March, the same month Bard debuted to the public.

Google’s Bard is one of the highest-profile AI chatbots, alongside OpenAI’s ChatGPT, and both have faced plenty of public scrutiny. One of the biggest criticisms of these tools is that they lie regularly and convincingly, a potentially disastrous downside that aligns with the very warnings Google reportedly ignored.

Google Staff Questioned Bard's Readiness

The new reports, from Bloomberg, say that employees were allowed to test Bard in February, and many warned that the tool was not ready for a public release.

Employees reportedly called Bard a liar that was not ready for launch in its current state, flagging major failings of the AI.

One employee warning reportedly stemmed from their conversation with Bard on how to land a plane and how to scuba dive — two potentially deadly activities that don’t have a lot of room for advice from, say, pathological liars.

The employee reportedly concluded that Bard’s answers could have a harmful impact on users if followed to the letter.

Risk Assessment Was Reportedly No Match for AI Race to Launch

A risk assessment in March was also overruled, Bloomberg says. Google’s push to release its own generative AI bot was at least partially spurred by the competition: OpenAI’s ChatGPT and its integration with Microsoft’s rival search engine, Bing.

Another reported claim from Bloomberg holds that senior Google leadership sent out a “code red” in December 2022 to accelerate its AI programs, leading to ethical compromises.

Double-Check Your AI Conversations

The takeaway for anyone dabbling in chatbots — whether they’re from Google, Microsoft, or any other of a handful of competing free and paid services — is to take everything the bots say with a grain of salt.

Generative AI tools draw on vast stores of training data, but they don’t always recombine that information accurately. The list of issues to watch for is even longer: chatbot output should be checked for plagiarism, copyright violations, and poor math, among other concerns.

AI chatbots can certainly offer plenty of benefits in many cases. Just don’t rely on their advice the next time you go skydiving.


Written by: Adam
Adam is a writer at Tech.co and has worked as a tech writer, blogger and copy editor for more than a decade. He was a Forbes Contributor on the publishing industry, for which he was named a Digital Book World 2018 award finalist. His work has appeared in publications including Popular Mechanics and IDG Connect, and his art history book on 1970s sci-fi, 'Worlds Beyond Time,' was a 2024 Locus Awards finalist. When not working on his next art collection, he's tracking the latest news on VPNs, POS systems, and the future of tech.