An artificial intelligence (AI) ethics group has asked the Federal Trade Commission (FTC) to investigate OpenAI’s GPT-4 technology, claiming that the powerful language model is “biased, deceptive, and a risk to privacy and public safety.”
Since OpenAI’s chatbot first exploded onto the scene, it has attracted over 100 million users, prompted copycat models from companies like Google and Snapchat, and sparked widespread fears around issues like job insecurity.
This formal complaint follows an open letter, signed by Elon Musk, Steve Wozniak, and other notable figures in tech, that pleads for AI development to be paused immediately. But do tools like GPT-4 really pose a threat to humanity?
Pressure Is Being Put on the FTC to Halt OpenAI’s New Releases
The Center for AI and Digital Policy (CAIDP) has just filed a complaint with the FTC, citing the potential dangers of GPT technology and calling for a temporary freeze on “large generative AI experiments” conducted by companies like OpenAI.
The complaint accuses OpenAI of violating the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and Section 5 of the FTC Act, which outlaws “unfair or deceptive acts or practices in or affecting commerce”.
“The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4.” – Marc Rotenberg, president of CAIDP
The group also criticizes OpenAI for failing to meet the FTC’s standard that AI use be “transparent, explainable, fair and empirically sound while fostering accountability.”
Major Names in Tech Sign Open Letter to Freeze AI Development
CAIDP’s complaint comes just days after the Future of Life Institute released an open letter demanding that OpenAI and other AI labs pause their work for at least six months, to make space for discussions around the technology’s ethics.
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” – Open Letter from the Future of Life Institute
The letter warned of an “out-of-control race to develop ever more powerful digital minds” that no one can “understand, predict, or reliably control”. It also stated that if the pause couldn’t be enacted quickly, governments should step in and institute a moratorium.
The letter was signed by some of the leading names in tech, including Twitter CEO and OpenAI co-founder Elon Musk, and Apple co-founder Steve Wozniak.
Musk’s U-turn on AI is, understandably, sparking concern among users of the technology. But did OpenAI and Musk really create Frankenstein’s monster? Or are anxieties around the technology overblown?
Is OpenAI’s GPT-4 Really a Threat to Our Safety?
Earlier this month, OpenAI launched GPT-4 — an upgraded model that boasts a number of “human level” capabilities, including image input and enhanced coding and translation.
As GPT technology advances at a rapid pace, the newer model offers even more opportunities to its users, from helping teachers create personalized learning experiences for students to helping programmers build mobile apps with greater ease.
The benefits tools like GPT bring to the business landscape have been well documented too, with a growing number of workers using the chatbot to automate core processes, from email production to keyword research.
However, as the technology spreads like wildfire, the tool has repeatedly been accused of causing real-world harm – from giving doctors incorrect medical advice to helping cybercriminals launch successful phishing attacks.
Even OpenAI itself admits that the chatbot can write “plausible-sounding but incorrect or nonsensical answers”, and that it is “a little bit scared” of the tool being used for nefarious purposes like large-scale disinformation campaigns and offensive cyberattacks.
Ultimately, how tools like GPT are applied depends on the user’s intent. But as we head further into unknown territory, pausing to reflect on their potential impact seems like a sensible move.