When it comes to cybersecurity, the only constant is change. Sure, we’ve heard that before, right? But recent developments have taken that notion and turned it up to 11. As AI has moved from merely helpful to fully autonomous in the form of AI agents, it’s no longer just helping folks write emails they don’t feel like writing. It’s now capable of making decisions and taking actions on its own, operating with a degree of independence that fundamentally alters the threat model for every managed service provider (MSP), managed intelligence provider (MIP) and the clients they support.
The past year has delivered several unmistakable signals that keeping businesses secure is about to get more challenging: the first documented AI‑orchestrated cyberattack, the rise of shadow agents and the rapid escalation of AI‑driven social engineering. For our partners, these are very real risks that will shape service delivery, risk management and client expectations throughout 2026.
Let’s take a look at the top security trends happening right now in the agentic AI era.
The New Shadow IT Problem: Shadow Agents
In 2025, partners spent much of their AI‑related energy trying to rein in shadow AI — that is, employees quietly pasting sensitive data into unapproved tools. That problem hasn’t gone away. While 92% of respondents to a survey of governance, risk and compliance (GRC) and audit professionals said they’re confident in their visibility into third‑party AI use, a recent industry report showed that more than 60% of organizations still lack mature AI governance.
But 2026 introduces a more dangerous variant: shadow agents. Sounds like some sort of 007 situation, right? Sure, but not in a good way.
A shadow agent is an autonomous workflow or system created by an employee — often with good intentions — using personal accounts, low‑code automation platforms or unvetted APIs. These agents come with a load of baggage — excessive permissions, no audit trail and no lifecycle management — which means they slip past your security stack and all your controls undetected.
Unlike plain ol’ shadow AI, these systems can take action instead of just leaking data. In fact, shadow AI involvement can add as much as $670K to breach costs.
AI‑Orchestrated Cyberattacks Will Likely Increase
The clearest sign things have changed came in late 2025, when Anthropic disclosed the first documented case of a large‑scale cyber‑espionage campaign executed primarily by an AI system: GTG-1002 (no, that’s not a new Terminator). The attackers, assessed to be a state‑sponsored group, used Claude Code as an autonomous operator to attempt reconnaissance, exploit‑code generation, credential harvesting and data exfiltration across roughly 30 global targets.
Anthropic reported the AI executed 80–90% of the intrusion activity autonomously. Human beings only had to step in at a handful of decision points. That’s scary business. It means the agentic AI threat has moved from the hypothetical to the very real.
For partners, that means some immediate implications:
- Attackers can now scale operations based on compute, not human talent.
- The time between vulnerability discovery and exploitation is collapsing.
- AI guardrails can be bypassed through prompt manipulation and by breaking attacks down into smaller, seemingly benign, chainable tasks.
Social Engineering Reaches a Turning Point
While autonomous exploitation captures headlines, the most immediate risk for clients is the transformation of social engineering — manipulating people into divulging confidential information or taking actions that enable fraud.
AI‑generated impersonation attacks have become a primary vector for high-value intrusions. In fact, projections had deepfakes jumping from 500,000 detected instances in 2023 to 8 million in 2025 (a 1,500% increase).
Recent incidents have involved real‑time deepfake video calls featuring multiple fabricated executives, which would have sounded like something from a movie two years ago, but here we are. Imagine your boss calls you up and demands access to a file right away. Who is thinking about fraud in that moment? You’re just thinking about covering your butt. Boom! Threat actors are in.
For MSPs, this means user training has to go beyond “spot the typo.” Identity verification must be multifactor and multichannel.
MSPs should treat AI agents similarly to humans, using verifiable agent IDs, decentralized identifiers and zero‑trust, context‑aware access controls to manage their autonomous behavior. They should also implement fine‑grained authorization and continuous monitoring to securely govern agent actions and delegation patterns across multi‑agent systems.
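As a minimal sketch of what deny‑by‑default, context‑aware agent authorization could look like in practice (the `AgentIdentity` class, the scope format and the `authorize` checks are all illustrative assumptions, not any vendor’s actual API):

```python
from dataclasses import dataclass, field

# Hypothetical sketch: treat agents like human principals with a verifiable
# identity, explicit scopes and context checks. Deny by default.

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str   # verifiable ID, e.g. backed by a decentralized identifier
    owner: str      # the human or team accountable for this agent
    scopes: frozenset = field(default_factory=frozenset)

def authorize(agent: AgentIdentity, action: str, resource: str, context: dict) -> bool:
    """Allow an action only if the agent holds the exact scope and context checks pass."""
    required_scope = f"{action}:{resource}"
    if required_scope not in agent.scopes:
        return False                      # no implicit permissions, ever
    if context.get("delegated") and "delegate" not in agent.scopes:
        return False                      # delegation across agents needs its own scope
    return context.get("session_verified", False)  # e.g. a fresh mutual-TLS or token check
```

A billing agent scoped to `read:invoices` would pass a read request but be refused a write, and any delegated call would fail unless the agent was explicitly granted a delegation scope.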
Prompt Injection: The Weakest Link in Agentic Workflows
Even well‑intentioned agents introduce new risks. They can fall victim to prompt injection, where threat actors hijack agentic workflows, manipulate memory and redirect autonomous actions. Large language models (LLMs) can’t always tell the difference between legitimate instructions and malicious inputs because they both look like normal text. Plus, script-kiddie-level threat actors can now use open-source agentic tooling to automate reconnaissance, exploitation and lateral movement.
For MSPs and MIPs, this means you really need to be on your toes, as any old agent that reads untrusted content can be a potential attack surface. You need to make permission boundaries strict and explicit, and it’s essential to establish isolation between perception and action.
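One way to picture strict permission boundaries and the perception/action split is a tool‑call gate: instructions found inside untrusted content never execute, high‑impact actions escalate to a human, and unknown tools are denied outright. The tool names and the `gate_tool_call` helper below are hypothetical, a sketch of the pattern rather than a real framework:

```python
# Perception-only tools this agent may call on its own (illustrative names).
ALLOWED_TOOLS = {"search_docs", "summarize"}
# High-impact actions that always require explicit human sign-off.
PRIVILEGED_TOOLS = {"send_email", "delete_file"}

def gate_tool_call(tool: str, from_untrusted_content: bool) -> str:
    """Return 'allow', 'escalate' or 'deny' for a tool call the model requested."""
    if from_untrusted_content:
        return "deny"       # injected instructions in retrieved content never run
    if tool in PRIVILEGED_TOOLS:
        return "escalate"   # isolation: the action layer waits for a human
    if tool in ALLOWED_TOOLS:
        return "allow"
    return "deny"           # deny by default: unlisted tools never execute
```

The key design choice is that the gate keys off *where the request originated*, not what the request says, since an LLM can’t reliably distinguish legitimate instructions from malicious ones.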
Governance Becomes a New Core Offering for MSPs
The agentic era demands a new approach to governance, and most clients can’t do it alone. MIPs will need to define, deliver and enforce frameworks that keep rogue agents at bay.
A strong agentic governance framework should answer questions such as:
- Who is allowed to build agents?
- What permissions can agents have?
- How are agent actions logged and audited?
- How are agents tested, validated and updated?
- How do we prevent agent sprawl?
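Several of those questions can be enforced mechanically against an agent registry. Here’s a rough sketch of such a check; the record fields (`owner`, `scopes`, `audit_log`, `last_validated`) and the 90‑day re‑validation policy are assumptions for illustration, not a standard:

```python
from datetime import date, timedelta

# Assumed policy for this sketch: every agent is re-tested at least quarterly.
MAX_VALIDATION_AGE = timedelta(days=90)

def governance_violations(agent: dict, today: date) -> list:
    """Return a list of governance gaps for one agent registry record."""
    issues = []
    if not agent.get("owner"):
        issues.append("no accountable owner")      # who is allowed to build it?
    if not agent.get("scopes"):
        issues.append("no declared permissions")   # what can it do?
    if not agent.get("audit_log", False):
        issues.append("actions not logged")        # how is it audited?
    last = agent.get("last_validated")
    if last is None or today - last > MAX_VALIDATION_AGE:
        issues.append("validation overdue")        # tested, validated and updated?
    return issues
```

Run nightly across the registry, a check like this turns the governance questions above from a policy document into an enforceable control, and any agent missing from the registry entirely is, by definition, a shadow agent.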
Forget getting to market fastest with your agentic offerings — anyone can do that. Those who build governance offerings first will lead the market. Those who don’t will spend the next few years cleaning up after clients who unknowingly built their own attack surfaces.
The Case for Vetted Agents
Bespoke, employee‑built agents are hard to secure at scale. Professionally maintained agents offer a more sustainable path forward. They benefit from centralized testing, penetration analysis, update pipelines and lifecycle management.
For partners, vetted agents represent a safer alternative to shadow agents, opening up a new revenue stream and a standardized way to forge ahead in the agentic universe. Remember: Your credibility and reputation are more important than a quick buck.
Forging Ahead
The agentic AI era is here, whether we like it or not. As the IT providers in charge of clients’ cybersecurity health, our job is to make sure their agents are well‑designed and that governance is implemented correctly to minimize any harm an agent might cause, whether malicious or accidental. The good news is that agents can deliver extraordinary value, as long as they’re built and governed responsibly.
We should expect some bumps in the road during this time. But there’s lots you can do to make the journey smoother, from building strong governance frameworks and standardizing agent deployment to offering curated agent solutions, monitoring for shadow agents and preparing for AI‑accelerated threats.
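Monitoring for shadow agents, for instance, can start as simply as flagging outbound traffic to known AI APIs that aren’t on your sanctioned list. A toy sketch, where the hostnames and the one‑line log format are illustrative rather than a vetted blocklist:

```python
# Hypothetical sanctioned endpoint for this organization (example domain).
APPROVED_AI_HOSTS = {"api.approved-vendor.example"}
# A small sample of AI API hosts to watch for; a real list would be much longer.
KNOWN_AI_HOSTS = {
    "api.approved-vendor.example",
    "api.openai.com",
    "api.anthropic.com",
}

def flag_shadow_agent_traffic(log_lines):
    """Return (user, host) pairs that hit a known AI API outside the approved set."""
    flagged = []
    for line in log_lines:
        user, host = line.split()[:2]   # assumed log format: "<user> <host> <method> ..."
        if host in KNOWN_AI_HOSTS and host not in APPROVED_AI_HOSTS:
            flagged.append((user, host))
    return flagged
```

It’s a blunt heuristic, but even this level of visibility surfaces the personal‑account automations that would otherwise run with no audit trail at all.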
Those who do the agentic thing correctly will see a ton of opportunity come their way. Those who barrel forward with untested agents or poor governance? Good luck, my friends.
But hey, we’ve got something better than luck! It’s called the Managed Intelligence Provider Playbook, and it’s got monetization strategies and best practices to transform from MSP to MIP. And you can always reach out to our team to discuss ways to navigate the agentic era with greater peace of mind. It’s a wild world out there right now, but we’re here to help!


