OpenAI, the company behind ChatGPT, has quietly confirmed it is shuttering the AI detection tool it launched back in January.
In a brief note, OpenAI said that its AI Classifier software was being pulled due to the “low rate of accuracy” it exhibited when trying to distinguish between human-written and AI-generated content.
The firm added that it will continue to research “effective provenance techniques” for text, and to develop tools that can tell whether audio and visual content has been made using generative AI.
Should AI Classifier Ever Have Launched?
News that AI Classifier was being effectively recalled wasn't shared widely by OpenAI; instead, it was added as an update to the blog post that first announced the tool at the start of the year.
This was first spotted by Decrypt (to the best of our knowledge), and what's interesting on re-reading the original post is that the tool launched despite OpenAI's admission that it was “not fully reliable.” In practice, that meant an accuracy rate of just 26% at the time of launch, with particularly poor performance on text of fewer than 1,000 characters.
That number clearly hasn't climbed high enough for OpenAI to be comfortable continuing to promote AI Classifier: the link to the tool is now dead. This raises the question of whether it should have launched publicly at all, though fortunately there are a number of other good ways to detect AI plagiarism.
Plagiarism and the Wider AI Ethics Debate
Plagiarism is one of the many issues that feature in the increasingly heated debate around AI and ethics.
While many people agree that artificial intelligence has reached a point where it can do some pretty cool stuff, at present there are few checks and balances on the industry, beyond some non-binding guidelines from the likes of the White House and United Nations.
A global AI summit being convened in the UK this Fall might change that, but right now AI plagiarism seems like just the tip of the iceberg, with some people fearing the unchecked development of superintelligence could one day threaten humanity as a whole. At the very least, it's currently making the lives of educators and other folks charged with verifying the originality of written work more difficult than ever.
AI's Double-Edged Sword in the Spotlight
Right now, though, the threat from AI appears more practical than existential. And that double-edged sword couldn't be better illustrated than by Apple: one of the first big companies to ban AI tools internally, and now the proud developer of its own AppleGPT chatbot.
It's an almost comic tension. On the one hand, who wouldn't want to take advantage of the many ways businesses can use ChatGPT and similar tools? On the other, AI has yet to prove itself on accuracy and reliability, not to mention that it appears to threaten jobs at a time when tech industry layoffs have become another sad “new normal.”
In other words: can you trust AI when, as today's news highlights, AI doesn't even fully trust itself? As with any burgeoning technology, experience tells us these are the finest of fine lines, set to be drawn and redrawn many times before any kind of consensus is reached.