The hits keep coming for cybersecurity as we head into a new year. For starters, one report recently found that 48% of cybersecurity leaders failed to report data breaches in the past year, due in part to fears of punitive responses.
At the same time, the US Department of Defense stated in October 2025 that it plans to cut back on several types of cybersecurity training, including annual training that one expert has called “critical.”
We wanted to check in with a wide range of tech experts and C-Suite executives to see if we could figure out what’s coming down the pike for IT teams in 2026.
Artificial intelligence continues to be the buzziest technology around, so it’s little surprise that the large majority of our responses centered on how AI will impact digital security. Read on to learn about the most interesting trends ahead, from shadow AI challenges to autonomous agents to AI-powered malware.
Top Cybersecurity Predictions for 2026:
- IT Systems Will Need Observability And Visibility More Than Ever
- AI Must Be Attributed – Or Risk Being Mistaken For Compromise
- The Rise of Shadow AI
- Regulatory Scrutiny Comes for AI-Created Software
- Cybersecurity Becomes a Core Data Competency
- AI-enabled Malware Will Surge This Year
IT Systems Will Need Observability And Visibility More Than Ever
The term high availability, or HA, refers to an IT system that’s almost always accessible and reliable for use. The goal is to be as close to 100% reliable as possible, and in 2026, that goal’s tougher than ever.
According to Cassius Rhue, Vice President of Customer Experience at SIOS Technology Corp., observability and visibility will be essential for keeping complex IT environments up and running. Why is visibility so important these days? The sheer sprawl of modern work.
“As IT infrastructures expand across on-premises, cloud, hybrid, and multi-cloud environments, visibility into application performance and health and interdependencies of the elements of the IT stack will become mission-critical,” Rhue says.
“In 2026, observability will emerge as a key differentiator for HA solutions, allowing IT teams to identify and resolve issues before they impact uptime.” -Cassius Rhue, Vice President of Customer Experience at SIOS Technology Corp.
This year, Rhue tells us, the most successful HA platforms will be those that provide deep insights across the full stack—from hardware to application layer.
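The availability targets behind HA are easy to quantify: each extra “nine” of uptime slashes the downtime an organization can tolerate. A quick back-of-the-envelope sketch (the function name is ours, not from SIOS):

```python
# Why "as close to 100% as possible" is so demanding: the allowed
# downtime per year shrinks sharply with each extra "nine".
MINUTES_PER_YEAR = 365 * 24 * 60

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.99, 99.999):
    print(f"{target}% availability -> {allowed_downtime_minutes(target):.1f} min/year")
```

At 99% a system can be down for roughly 87 hours a year; at “five nines” that budget collapses to about five minutes, which is why teams need observability that catches issues before they cost uptime.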
AI Must Be Attributed – Or Risk Being Mistaken For Compromise
Here’s a scenario for you: Your IT team discovers an identity in its environment that no one explicitly created and that can’t be accounted for.
Until a few years ago, that was an obvious sign that your security had been breached. Today, however, it might just mean that you haven’t kept track of what an AI agent was up to, and it created the identity thanks to its autonomous workflow.
“This creates attribution challenges,” says Paul Nguyen, co-founder and co-CEO at the security platform company Permiso.
“When an identity behaves suspiciously, security teams need to determine: Is this legitimate autonomous behavior or is it compromise? Without clear attribution (human owner, approval chain, business justification), this determination becomes difficult.”
Nguyen foresees organizations implementing attribution tracking for AI-generated identities. AI agents will need to log their decisions and tag their creations with their name, a timestamp, and a business justification.
“By year-end 2026, identity attribution becomes a prerequisite for secure agentic AI deployment.” -Paul Nguyen, co-founder and co-CEO of Permiso
Without this extra oversight, security teams won’t have the audit trails they’ll increasingly need to do their jobs.
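A minimal sketch of what such an attribution record might look like. The field names and example values are illustrative assumptions, not taken from Permiso or any specific platform:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical attribution record along the lines Nguyen describes:
# every identity an AI agent creates is tagged with the agent's name,
# an accountable human owner, a timestamp, and a business justification.
@dataclass
class IdentityAttribution:
    identity_name: str
    created_by_agent: str        # which AI agent created this identity
    human_owner: str             # accountable person for the agent
    business_justification: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def audit_line(self) -> str:
        """One line suitable for an append-only audit trail."""
        return (f"{self.created_at} identity={self.identity_name} "
                f"agent={self.created_by_agent} owner={self.human_owner} "
                f"reason={self.business_justification!r}")

record = IdentityAttribution(
    identity_name="svc-invoice-reader",
    created_by_agent="ap-automation-agent",
    human_owner="jane.doe@example.com",
    business_justification="Read-only access for invoice ingestion workflow",
)
print(record.audit_line())
```

With records like this, an unexplained identity can be traced back to an agent and its approver instead of triggering a false compromise alarm.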
The Rise of Shadow AI
The term “shadow AI” refers to any unauthorized employee use of AI tools in the process of doing their jobs. The AI work is powering the company, but the company isn’t aware of the type of tool, time spent, or other variables.
Needless to say, shadow AI use isn’t good news for a company, and it’s a big concern that has driven companies to adopt AI policies over the last few years to let their employees know what is and isn’t allowed.
Monica Landen, CISO at Diligent, predicts that shadow AI will continue to take root throughout the new year, despite the significant governance and security risks it poses.
“As investments in AI surge, with a projected increase of 40% in 2026, organizations must take proactive measures to mitigate these security risks and ensure their workforce is aligned with organizational AI policies,” Landen says.
How can companies counter this issue? Landen recommends AI governance policies, lessons to educate employees on the risks and benefits of AI use, and efforts to monitor AI use.
For a hand, see our guide on How to Create an AI Policy.
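Monitoring AI use can start small. One tactic is flagging outbound requests to known AI service domains in proxy logs; the sketch below uses an illustrative domain list and an assumed log format, not a vetted blocklist:

```python
# Flag proxy-log requests to known AI service domains.
# Domain list and log format are illustrative assumptions.
AI_SERVICE_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for requests to AI service domains."""
    hits = []
    for line in log_lines:
        # Assumed log format: "<timestamp> <user> <domain> <bytes>"
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_SERVICE_DOMAINS:
            hits.append((parts[1], parts[2]))
    return hits

logs = [
    "2026-01-05T09:12:01Z alice api.openai.com 4821",
    "2026-01-05T09:12:05Z bob intranet.example.com 1023",
]
print(flag_shadow_ai(logs))
```

Flags like these are a starting point for conversations with employees, not evidence of wrongdoing: the goal is to surface unsanctioned tools so policy and training can catch up.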
Regulatory Scrutiny Comes for AI-Created Software
“2025 proved that AI can write code faster than ever, but not necessarily safer,” notes Bhagwat Swaroop, CEO of Sonatype.
Some of the AI-driven cybersecurity concerns that Swaroop flags include unverified dependencies, outdated packages, and security gaps that traditional testing won’t catch. Add in shadow AI, and building securely is a bigger challenge than ever.
The response? A push for greater transparency around software developed by LLMs or AI agents, backed by more regulation.
“AI Bills of Material will enter C-level conversations,” Swaroop predicts, while “the EU AI Act and Cyber Resilience Act will push for full transparency in software development, including AI-generated content.”
Organizations can prepare by tracking code provenance and validating model inputs across development lifecycles.
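What might a provenance record look like in practice? Here is a hypothetical “AI Bill of Materials” entry; the field names are illustrative assumptions, not drawn from the EU AI Act or any existing SBOM standard:

```python
import hashlib
import json

# Hypothetical AI-BOM entry: for each artifact, record whether the code
# was human- or AI-written, which model produced it, and a content hash
# so provenance can be verified later.
def aibom_entry(path: str, source: str, model, code: str) -> dict:
    return {
        "artifact": path,
        "source": source,  # "human" or "ai-generated"
        "model": model,    # model name/version, or None for human-written code
        "sha256": hashlib.sha256(code.encode()).hexdigest(),
    }

entry = aibom_entry(
    "src/parser.py",
    source="ai-generated",
    model="example-llm-1.0",
    code="def parse(x):\n    return x.strip()\n",
)
print(json.dumps(entry, indent=2))
```

Validating the hash against the shipped artifact is one way to confirm that what was reviewed is what was deployed, across the development lifecycle.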
Cybersecurity Becomes a Core Data Competency
John Morris, CEO of Ocient, sees yet another way AI will impact cybersecurity: The exponentially growing data loads fueled by AI’s real-time analyses will push cybersecurity to a “critical inflection point,” he says.
Organizations will need to hustle to upgrade their infrastructure in order to handle these high-velocity data environments.
“Companies will begin treating cybersecurity as a core data competency, not just an IT concern. In 2026, protecting the data means protecting the business. As the data load snowballs, so too will the urgency to secure it at scale.” – John Morris, CEO of Ocient
Let’s hope companies heed this warning: Cybersecurity strategies must be prioritized in a world where data use is skyrocketing and threats are emerging with every new system integration.
AI-enabled Malware Will Surge This Year
The bad actors of the software world will also invest in AI in 2026: According to LastPass Cyber Threat Intelligence Analyst Stephanie Schneider, more and more threat actors will deploy AI-enabled malware in active operations this year.
Google’s Threat Intelligence team has found that this type of malware can “generate scripts, alter codes to avoid detection and create malicious functions on demand,” Schneider explains.
“Nation-state actors have used AI-powered malware to adapt, alter and pivot campaigns in real-time, and these campaigns are expected to improve as the technology continues to develop,” she adds.
With the malicious technology only set to become more powerful over the coming year, it’s another sign that cyber defenders will have a challenging threat landscape to navigate.
We say it every year, but it still holds true in 2026: Cybersecurity teams are continuing to compete with threat actors in an ever-escalating arms race. AI may be a powerful tool, but it’s just the latest one in a long line of threats to IT environments everywhere.