Key Takeaways
- Google found the first known use of AI to uncover and exploit a zero-day security flaw.
- AI use was confirmed by a hallucinated CVSS score, among other indicators.
- Google has already disclosed the vulnerability to the impacted vendor.
Google has just reported the first (known) example of hackers using AI to uncover and exploit a zero-day security flaw.
It’s a sign that the AI threat many experts predicted is officially here for the cybersecurity industry: AI tools can help bad actors discover new pathways towards their goals.
Here are the details to know.
How AI Developed a Zero-Day Exploit
The news comes from a freshly released report by the Google Threat Intelligence Group (GTIG) that follows up on the group’s February 2026 report covering AI-related threat activity.
Now, they say, they have for the very first time “identified a threat actor using a zero-day exploit that we believe was developed with AI.” Specifically, it was within a Python script that let the user bypass two-factor authentication on “a popular open-source, web-based system administration tool.”
The report says that Google likely stopped this particular threat:
“The criminal threat actor planned to use it in a mass exploitation event but our proactive counter discovery may have prevented its use. Threat actors associated with the People’s Republic of China (PRC) and the Democratic People’s Republic of Korea (DPRK) have also demonstrated significant interest in capitalizing on AI for vulnerability discovery.” – GTIG
For hackers, the report concludes, AI tools represent “expert-level force multipliers for vulnerability research and exploit development.”
How Google Identified the Use of AI
You might be wondering what makes Google so confident that the hackers relied on AI specifically to uncover the exploit – after all, the code didn’t appear to include the word “delve.”
It did, however, include other tell-tale signs, Google says.
“[T]he script contains an abundance of educational docstrings, including a hallucinated CVSS score, and uses a structured, textbook Pythonic format highly characteristic of LLMs training data (e.g., detailed help menus and the clean _C ANSI color class).” – GTIG
The threat has since been contained: GTIG worked with the impacted vendor to disclose the vulnerability.
And don’t worry, Google fans: the team says that “we do not believe Gemini was used” for this particular process.
The Age of AI-Powered Cybercrime
AI agents and other LLM-powered tools have been making waves across all areas of business security for years now.
That’s not all good news. Recently, the AI coding assistant Cursor reportedly wiped a company’s entire database within just a few seconds, before then issuing an apology.
That’s not stopping the rollout of even more AI. At least, not according to a new Gartner prediction that claims the average global Fortune 500 company will have over 150,000 AI agents by 2028.
For cybersecurity professionals, increasing AI use means that criminals and security experts alike have a new set of tools to help each group attempt to one-up the other.
We can only hope the bad actors are at least as likely to accidentally delete their entire database as all the law-abiding professionals.