If you’re using ChatGPT’s recently launched plugins, you might want to think carefully about the security implications.
GPTs, launched by OpenAI in November 2023 as a way for users to develop custom versions of the popular AI chatbot, contained a series of severe security flaws that allowed cybercriminals to access online accounts on third-party websites and steal user data, according to a new report.
The issues have since been reported and resolved, but the news is a sobering reminder of how vulnerable third-party GPT plugins can be.
ChatGPT Plugin Security Flaws Found
Security researchers at Salt Security have issued a dire warning to users of ChatGPT plugins, after discovering three potentially harmful vulnerabilities affecting GPTs on the OpenAI store.
The first of the three security flaws affected the installation process for ChatGPT plugins. It allowed hackers to install malicious plugins without users’ consent and to read their private messages, which may contain sensitive data such as passwords and other credentials.
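To illustrate the class of weakness involved (this is not Salt Security’s published exploit, and the endpoint and parameter names below are hypothetical): in a standard OAuth authorization-code flow, a plugin installation should be bound to the user’s own session with an unguessable “state” value, so a victim can’t be tricked into completing an approval that an attacker initiated. A minimal sketch in Python/Flask:

```python
# Minimal sketch of the OAuth "state" check whose absence enables this class
# of attack. Endpoint and parameter names are hypothetical, not OpenAI's API.
import secrets
from flask import Flask, request, session, abort, redirect

app = Flask(__name__)
app.secret_key = secrets.token_hex(32)

AUTHORIZE_URL = "https://auth.example-plugin.com/oauth/authorize"  # hypothetical

@app.route("/install")
def install():
    # Bind the approval to this user's session with an unguessable value.
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    return redirect(f"{AUTHORIZE_URL}?client_id=demo&state={state}")

@app.route("/callback")
def callback():
    # Reject approvals the user never initiated (e.g. an attacker-crafted link).
    if request.args.get("state") != session.pop("oauth_state", None):
        abort(403)
    code = request.args.get("code")
    # ...exchange `code` for a token and finish the plugin installation...
    return "Plugin installed"
```

Without that session-bound check, any link that reaches the callback can complete an installation the user never asked for.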
Meanwhile, the second vulnerability affected PluginLab, a resource that enables developers and businesses to create their own ChatGPT plugins. Researchers identified a flaw in several PluginLab-built plugins that allowed hackers to take over accounts on third-party services such as GitHub.
Lastly, they detected an OAuth redirection manipulation issue, affecting multiple plugins, that could have let cybercriminals take control of user accounts and access their credentials.
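The report doesn’t publish the affected vendors’ code, but the standard defense against this kind of redirect manipulation is to compare the redirect URL in each authorization request against an exact, pre-registered allow-list before sending the authorization code anywhere. A minimal sketch, with hypothetical registered URLs:

```python
# Sketch of the redirect_uri allow-list check whose absence lets an attacker
# steal authorization codes. The registered URLs here are hypothetical.
from urllib.parse import urlsplit

REGISTERED_REDIRECTS = {
    ("https", "plugin.example.com", "/oauth/callback"),  # exact, pre-registered
}

def is_allowed_redirect(redirect_uri: str) -> bool:
    """Allow only exact, pre-registered redirect targets; never prefix-match."""
    parts = urlsplit(redirect_uri)
    return (parts.scheme, parts.netloc, parts.path) in REGISTERED_REDIRECTS

assert is_allowed_redirect("https://plugin.example.com/oauth/callback")
assert not is_allowed_redirect("https://attacker.example/oauth/callback")
```

If the authorization server accepts an attacker-supplied redirect instead, the code (and with it the victim’s account) is delivered straight to the attacker.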
ChatGPT Vulnerabilities Discovered Last Year
Researchers at Salt Labs discovered the first of the ChatGPT vulnerabilities on June 25, 2023, and discussed its technical details with the AI company the following month.
In September 2023, they found vulnerabilities in the PluginLab.AI and Kesem.ai ChatGPT plugins and disclosed the technical details to both vendors.
According to the Salt Labs team, OpenAI, PluginLab.AI, and Kesem.ai have since fixed the security flaws impacting their products.
Yaniv Balmas, vice president of research at Salt Security, said in a statement: “Generative AI tools like ChatGPT have rapidly captivated the attention of millions across the world, boasting the potential to drastically improve efficiencies within both business operations as well as daily human life.
“As more organizations leverage this type of technology, attackers are too pivoting their efforts, finding ways to exploit these tools and subsequently gain access to sensitive data.”
Third-Party Plugins a ‘Potential Weakness’
As generative AI tools like ChatGPT continue to rise in popularity for both personal and business use cases, hackers are closely watching and looking to take advantage.
Jake Moore, global security advisor at ESET, an online security software maker, urges users to be vigilant about the privacy and cybersecurity implications of AI tools.
“More people are becoming increasingly comfortable with inputting sensitive business and personal data into AI programs without giving it a second thought,” he told Tech.co.
“And as an increasing number of organisations adopt these new technologies, attackers are also shifting their strategies into seeking new methods to manipulate these tools. If threat actors are able to manipulate third party tools, such access to confidential information could be catastrophic for those involved.”
Moore advises organizations using AI tools to take a risk-averse approach and ensure proper privacy safeguards are in place. He admits, however, that third-party plugins “remain a potential weakness” and “must be treated cautiously.”