The tech world’s AI gold rush might have been a little hasty: Google ignored employee warnings and even overruled an official risk assessment in order to launch its AI chatbot Bard, reports claim.
The warnings were reportedly raised in internal messages seen by thousands of employees in February 2023, while the risk assessment was overruled a month later in March, the same month Bard debuted to the public.
Google’s Bard is one of the highest-profile AI chatbots, alongside OpenAI’s ChatGPT, and both have faced plenty of public scrutiny. One of the biggest claims against the tools is that they lie regularly and convincingly, a potentially disastrous flaw that aligns with the very warnings Google reportedly ignored.
Google Staff Questioned Bard’s Readiness
The new reports, from Bloomberg, say that employees were allowed to test Bard in February, and many warned that the tool was not ready for a public release.
Employees reportedly described Bard as a liar that was not ready for launch in its current state, flagging major failings of the AI.
One employee warning reportedly stemmed from their conversations with Bard about how to land a plane and how to scuba dive — two potentially deadly activities that leave little room for advice from, say, pathological liars.
The employee’s conclusion, reportedly, was that Bard’s answers could have a harmful impact on users if followed to the letter.
Risk Assessment Was Reportedly No Match for AI Race to Launch
A risk assessment in March was also overruled, Bloomberg says. Google’s push to release its own generative AI bot was at least partially spurred by the competition: OpenAI’s ChatGPT and its integration into Microsoft’s rival search engine, Bing.
Bloomberg also reports that senior Google leadership sent out a “code red” in December 2022 to accelerate the company’s AI programs, leading to ethical compromises.
Double-Check Your AI Conversations
The takeaway for anyone dabbling in chatbots — whether they’re from Google, Microsoft, or any of a handful of other competing free and paid services — is to take everything the bots say with a grain of salt.
Generative AIs all draw on vast stores of training data, but they don’t always recombine that information correctly. And the full list of issues to watch for is even longer: Chatbot output should be checked for plagiarism, copyright violations, and poor mathematical skills, among a few other concerns.
AI chatbots can certainly offer plenty of benefits in plenty of cases. Just don’t rely on their advice the next time you go skydiving.