Artificial intelligence (AI) may be just as fallible as actual intelligence, and experts are starting to realize that AI mistakes, or “hallucinations,” could be a feature rather than a bug.
The tech world is betting big on the development of generative AI platforms like ChatGPT. Microsoft, Google, and dozens of other companies have rolled out the tech onto their various platforms over the last few months, making it a huge selling point for any business looking for a productivity boost.
However, generative AI platforms are far from perfect, and even some pioneers are getting worried that these mistakes are going to be a lot harder to fix than they originally thought.
Experts: AI Hallucination “Isn't Fixable”
While generative AI platforms have been lauded for their productivity-improving content creation, there have been more than a few reports pointing to their consistent inaccuracy. Yes, the technology is still in its infancy, but some experts have noted that a perfect version of generative AI may simply not be possible.
“This isn’t fixable. It’s inherent in the mismatch between the technology and the proposed use cases.” – Emily Bender, director of the University of Washington’s Computational Linguistics Laboratory
To make matters worse, the gamble on whether or not AI technology will eventually grow out of its early-stage missteps is a big one. Big tech companies like Google have grand plans for the technology, including news writing platforms that will be required to be completely accurate in the face of libel laws.
Generative AI Mistakes in Action
You may be thinking to yourself, how bad could these AI mistakes really be? After all, you've been using AI at your job without your boss' knowledge and everything seems rock solid so far, right?
The reality is that, on the grand scale of generative AI use, mistakes are all too common. From incorrect information to outright misinformation, these platforms are not perfect enough to handle the potentially high stakes tasks that businesses want them for.
In one recent example, a law firm was fined by a Manhattan district judge for submitting fictitious legal research created by ChatGPT. The case obviously had real-world consequences and was based on nothing more than a fabrication from a generative AI platform.
All that to say, the mistakes from generative AI platforms are a lot worse than simple typos and left-out words; this technology can cause serious problems if it doesn't get more accurate, and quickly.
What Do AI Leaders Think?
As you can imagine, the heads of AI firms like OpenAI and Anthropic are less concerned, although their optimism is notably tempered by a humble understanding that there's a long way to go before these platforms are even close to reliable.
“I don’t think that there’s any model today that doesn’t suffer from some hallucination. They’re really just sort of designed to predict the next word. And so there will be some rate at which the model does that inaccurately.” – Daniela Amodei, president of Anthropic, the developer of Claude 2.
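Amodei's point about next-word prediction can be illustrated with a deliberately tiny toy sketch. Real models use neural networks trained on enormous text corpora; here a hand-made bigram table (entirely made up for illustration) stands in for the learned distribution, and the small probability assigned to a wrong continuation plays the role of a hallucination rate.

```python
import random

# Toy stand-in for a learned next-word distribution (illustrative only).
# Each entry maps a word to the probabilities of the words that may follow it.
BIGRAMS = {
    "the":     {"capital": 0.6, "model": 0.4},
    "capital": {"of": 1.0},
    "of":      {"france": 1.0},
    "france":  {"is": 1.0},
    "is":      {"paris": 0.8, "lyon": 0.2},  # 20% of samples "hallucinate"
}

def generate(start, steps, rng):
    """Sample one word at a time from the distribution over next words."""
    words = [start]
    for _ in range(steps):
        choices = BIGRAMS.get(words[-1])
        if not choices:  # no known continuation; stop generating
            break
        nxt = rng.choices(list(choices), weights=list(choices.values()))[0]
        words.append(nxt)
    return " ".join(words)

rng = random.Random(0)
print(generate("the", 5, rng))
```

Because every word is drawn from a probability distribution rather than looked up against facts, some fraction of outputs will confidently end in "lyon" instead of "paris", which is the structural issue the quote describes.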
Luckily, AI leaders are, in fact, aware that their platforms aren't churning out the cream of the crop when it comes to content, even if they are frustratingly flippant about that fact.
“I probably trust the answers that come out of ChatGPT the least of anybody on Earth.” – Sam Altman, CEO of OpenAI, developer of ChatGPT