OpenAI’s massively popular AI bot, ChatGPT, has drawn plenty of legal scrutiny. Now, a new privacy complaint is resurfacing a big concern: ChatGPT tends to make up information and present it as if it’s true.
The privacy rights group noyb is behind the complaint, which centers on an unnamed public figure who asked ChatGPT for his date of birth and repeatedly received incorrect answers.
For years, ChatGPT and similar generative AI tools have been unable to shake their habit of confidently stating falsehoods in simple conversations. With this new complaint, OpenAI risks legal consequences for perpetuating inaccurate personal data.
What’s ChatGPT Being Called Out For?
The term “AI hallucination” refers to any incorrect or misleading result that an AI model might generate. Tools like ChatGPT are large language models: They’re trained on huge amounts of data, which they can then replicate or rearrange in seemingly authentic ways. But the models often prioritize verisimilitude over facts.
In one of the most infamous examples, Google’s chatbot Bard flubbed its own February 2023 launch announcement by inaccurately claiming that the James Webb Space Telescope had captured the first pictures of a planet outside our solar system.
Those hallucinations can be as simple as ChatGPT refusing to admit that it doesn’t know a public figure’s birthday, and that’s a big problem for a tool billed as a way to handle simple tasks. The new complaint aims to spur legal action against OpenAI to address the problem.
“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.” ~ Maartje de Graaf, noyb data protection lawyer.
Why the New Privacy Complaint Matters
The noyb project — whose name, I’m delighted to tell you, stands for “none of your business” — is a nonprofit group dedicated to addressing “commercial privacy issues on a European level, i.e. privacy violations of your digital rights as a private citizen by companies and corporations,” according to its website.
Since the EU’s General Data Protection Regulation (GDPR) took effect in 2018, watchdogs like noyb have been able to spotlight the biggest privacy violations affecting people across Europe.
GDPR compliance failures can result in fines as high as 4% of a company’s global annual turnover, so they’re nothing to sneeze at. The right GDPR ruling could set a precedent for how all generative AI bots are handled, not just the standard-bearing ChatGPT.
Should AI and Other New Tech Be More Closely Regulated?
Granted, plenty of other AI stories have more obvious negative impacts than getting a birthday wrong. For example, some unscrupulous ebook writers have been AI-generating entire mushroom-foraging guides whose bad advice could poison anyone who takes it as the truth.
But the new complaint has one benefit over other, more shocking problems with AI: It makes clear that OpenAI’s ChatGPT is the party worth targeting, rather than ebook hustlers. The complaint aims to bring ChatGPT’s penchant for mistruths into the light, specifically so that legal standards can be set.
Are regulations good for the tech world? A certain anti-regulation streak has been part of Silicon Valley’s ethos for over half a century, thanks to the many libertarians, techno-rationalists, and “move fast and break things” proponents in its ranks.
Ultimately, your answer will likely depend on how comfortable you are with everything that today’s data-hungry tech corporations are getting up to. Some people prefer to move slow and fix things.
Big AI companies, however, are still moving too fast: Soon after that Google Bard launch failure, reports surfaced claiming that Google had overruled a risk assessment that caught the AI tool lying.