Some of the most widely-used AI agents and assistants in the world, including ChatGPT, Microsoft Copilot, Gemini, and Salesforce’s Einstein, are vulnerable to being hijacked with little to no user interaction, new research from Zenity Labs claims.
Reportedly, hackers can easily gain access to and exfiltrate critical data, manipulate workflows, and even impersonate users, with relative ease. It’s understood that attackers could also gain memory persistence, which essentially grants long-term access and control to compromised data.
The findings will concern technology chiefs everywhere, who have already indicated that cybersecurity is their top concern in 2025. And with a lot of employees using AI in secret, its security gaps may be more numerous than many senior leaders think.
AI Agents “Highly Vulnerable” to Hacking, Research Shows
A new report from Zenity Labs outlines how some of the most popular AI agents on the market are vulnerable to exploitation by bad actors.
During a presentation at the Black Hat USA cybersecurity conference, researchers revealed that the platforms in question all demonstrated serious security weaknesses.
This just in! View
the top business tech deals for 2025 👨💻
They showed that once hackers get access to these AI agents, they can exfiltrate sensitive data, manipulate workflows, and potentially even impersonate users. It is thought that they may even be able to gain memory persistence, which would give them long-term control and access.
Greg Zemlin, product marketing manager at Zenity Labs, said: “They can manipulate instructions, poison knowledge sources, and completely alter the agent’s behavior. This opens the door to sabotage, operational disruption, and long-term misinformation, especially in environments where agents are trusted to make or support critical decisions.”
Findings Shed Light on Numerous Security Loopholes
Zenity Labs set out to establish how attackers could utilize zero-click exploits to compromise leading AI agents. Among the findings, the company concluded that:
- ChatGPT can be hacked with an email-based prompt injection, giving attackers access to connected Google Drive accounts.
- Copilot leaked entire CRM databases through its customer-support agent.
- Einstein can be manipulated to reroute customer communications to different email accounts, giving attackers access to login information.
- Both Gemini and Copilot can be manipulated into targeting users with social-engineering attacks.
Upon discovering these vulnerabilities, Zenity Labs notified the companies concerned, which acted to patch the flaws and introduce long-term safeguards to ensure that the problems don’t recur. A spokesperson for Google stated: “Having a layered defense strategy against prompt injection attacks is crucial.” Unfortunately, that wasn’t enough to deter a recent data breach through the Salesforce CRM.
Companies Must Act Now to Avert Catastrophe
The findings from Zenity Labs will certainly ruffle some feathers in the AI world. Increasingly, AI agents are becoming a staple of the modern workplace, with companies investing heavily in their strategies and employees right across the business leveraging the latest tools to streamline their operations.
In our report, The Impact of Technology on the Workplace, we spoke to professionals across the business sector to get a better idea of how technology was shaping their working habits. Among our findings, we learned that just 27% of businesses had implemented policies to strictly limit the kind of data that can be shared with AI models.
It’s a worrying combination: not only are companies failing to introduce appropriate safeguards, but the AI tools themselves have obvious security vulnerabilities. With adoption continuing apace, businesses everywhere face a race against time to bed in strict governance policies — or they risk ending up as another data breach statistic.