OpenAI Hit With Privacy Complaint Over ChatGPT Hallucinations

"If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals."

OpenAI’s massively popular AI bot, ChatGPT, has faced plenty of legal challenges. Now, a new privacy complaint is resurfacing a big one: ChatGPT tends to make up information and present it as if it’s true.

The privacy rights group noyb is behind the complaint, which centers on an unnamed public figure who asked ChatGPT for his date of birth and received incorrect answers.

For years, ChatGPT and similar generative AI tools have been unable to shake their habit of lying in simple conversations. With this new complaint, OpenAI risks legal consequences for perpetuating inaccurate personal data.

What’s ChatGPT Being Called Out For?

The term “AI hallucination” refers to any incorrect or misleading result that an AI model generates. Because these chatbots are large language models, they’re trained on huge amounts of data, which they can replicate or rearrange in seemingly authentic ways. But the models often prioritize verisimilitude over factual accuracy.

In one of the most infamous examples, Google’s chatbot Bard gave a false answer during its own February 2023 launch announcement, inaccurately claiming that the James Webb Space Telescope had captured the first pictures of a planet outside our solar system.

 


Those hallucinations can be as simple as ChatGPT refusing to admit that it doesn’t know a public figure’s birthday, which is a big problem for a tool billed as a way to handle simple tasks. The new complaint seeks legal action against the company to address the problem.

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.” ~ Maartje de Graaf, noyb data protection lawyer.

Why the New Privacy Complaint Matters

The noyb project — whose name, I’m delighted to tell you, stands for “none of your business” — is a nonprofit group dedicated to addressing “commercial privacy issues on a European level, i.e. privacy violations of your digital rights as a private citizen by companies and corporations,” according to its website.

Since the EU’s GDPR (General Data Protection Regulation) took effect in 2018, watchdogs like noyb have been able to spotlight the biggest privacy violations affecting people across Europe.

GDPR compliance failures can result in fines as high as 4% of global annual turnover, so they’re nothing to sneeze at. The right GDPR ruling could set a precedent for how all generative AI bots are handled, not just the standard-bearing ChatGPT.

Should AI and Other New Tech Be More Closely Regulated?

Granted, plenty of other stories have more obvious negative impacts than getting a birthday wrong. For example, some unscrupulous ebook writers have been AI-generating entire guides to mushroom foraging that could poison anyone who takes them as the truth.

But the new complaint has one benefit over other, more shocking problems with AI: It makes clear that OpenAI’s ChatGPT is the party worth targeting, rather than ebook hustlers. The complaint aims to bring ChatGPT’s penchant for mistruths into the light, specifically so that legal standards can be set.

Are regulations good for the tech world? A certain anti-regulation streak has been a big part of Silicon Valley’s ethos for over half a century, thanks to the many libertarians, techno-rationalists, and “move fast and break things” proponents in its ranks.

Ultimately, your answer will likely depend on how comfortable you are with everything that today’s data-hungry tech corporations are getting up to. Some people prefer to move slow and fix things.

Big AI companies, however, are still moving too fast: Soon after that Google Bard launch failure, reports surfaced claiming that Google had overruled a risk assessment that had caught the AI tool lying.


Written by:
Adam is a writer at Tech.co and has worked as a tech writer, blogger and copy editor for more than a decade. He was a Forbes Contributor on the publishing industry, for which he was named a Digital Book World 2018 award finalist. His work has appeared in publications including Popular Mechanics and IDG Connect, and his art history book on 1970s sci-fi, 'Worlds Beyond Time,' was a 2024 Locus Awards finalist. When not working on his next art collection, he's tracking the latest news on VPNs, POS systems, and the future of tech.