Key Takeaways
- Gartner has outlined its top cybersecurity predictions for 2026, with AI agents set to provide a new “attack surface” for hackers.
- The firm is calling for heightened governance and oversight of AI tools to reduce the potential risk.
- As it’s becoming increasingly difficult to keep track of employee AI usage, cybersecurity professionals face an unprecedented problem.
AI agents will be a key cybersecurity risk in 2026, according to new research from Gartner. The consultancy published the findings as part of a report on the top cybersecurity trends of the year, with other key trends spotlighting “regulatory volatility” and “post-quantum computing.”
According to the firm, the uptake of AI agents and recent proliferation of vibe coding platforms will create new attack surfaces for cybercriminals to exploit. Gartner believes a higher level of governance and oversight is required to prevent a potential cybersecurity catastrophe.
The rapid ascent of AI has given rise to innumerable opportunities, but it has not been without risks. Because the technology is so new, safeguards have not kept pace with deployment. As a result, existing controls are not fit for purpose, and there is mounting evidence that AI agents are vulnerable to exploitation.
AI Agents Open New Attack Vectors for Cybercriminals
AI agents will be a key target for cybercriminals in 2026, and organizations need to tighten their oversight and governance to counter the rising threat, according to a new report from Gartner.
In the report, published as part of its wider research into cybersecurity trends, the firm argues that the rise of agentic AI, and particularly the explosion of no-code and vibe coding platforms, will create new “attack surfaces” for hackers to target.
AI-assisted vibe coding platforms, which allow users to spin up their own apps and programs from a simple written prompt, have democratized access to app-building software — and in the process handed cybercriminals unfettered access to swathes of unsecured code.
Scale of Problem Likely Much Worse Than Feared
Businesses have a problem. Not only are agent development and adoption accelerating, but it is becoming increasingly difficult to keep track of which AI tools employees are actually using.
“While AI agents and automation tools are becoming increasingly accessible and practical for organizations to adopt, strong governance remains essential. Cybersecurity leaders must identify both sanctioned and unsanctioned AI agents, enforce robust controls for each and develop incident response playbooks to address potential risks.” – Alex Michaels, Director Analyst at Gartner
Recent research suggests that as many as 28% of staff would continue to use tools like ChatGPT even if explicitly prohibited from doing so. This means that the scale of the threat is likely much worse than predicted — and action is required as a matter of urgency.
Cybersecurity Pros Face Uphill Struggle
It’s a perilous time to be in the cybersecurity business. With new technologies emerging all the time, the threat landscape is in a constant state of flux — and security personnel are having a hard time keeping up.
The yawning skills gap is only making a bad problem worse. Recent research shows that as many as 88% of professionals in this space have felt a “significant” impact from hiring and retention problems, and there doesn’t appear to be an end in sight.
In this climate, businesses are under pressure to upskill their existing employees on how to spot cyber risks. With 98% of senior leaders unable to identify all the signs of a phishing attack, this will not be an easy task — but if catastrophe is to be averted, it’s an urgent one.