Zero Trust for AI Agents: How Cisco Is Securing the Agentic Workforce
Here's a stat that should make you uncomfortable: 85% of organizations are actively testing AI agents, but only 5% have deployed them at scale. The gap? Security. Nobody trusts their agents enough to let them loose.
At RSAC 2026, Cisco unveiled a Zero Trust architecture built specifically for autonomous AI agents β not the humans using them, but the agents themselves. Meanwhile, NIST is writing federal security standards for AI agents, and OWASP published a new Top 10 just for agentic applications. The message is clear: if you're building AI agents, security can't be an afterthought.
π What You'll Need
- Understanding of AI agents β autonomous systems that take actions via APIs and tools
- Basic security concepts β authentication, authorization, least privilege
- Familiarity with Zero Trust β the "never trust, always verify" security model
- Optional β experience with MCP (Model Context Protocol) for agent-to-tool communication
π€ Why AI Agents Break Traditional Security
Traditional Zero Trust works like this: a user authenticates, gets a token with fixed permissions, and operates within those boundaries for the session. Predictable. Auditable. Manageable.
AI agents shatter every one of those assumptions.
Agents are non-human identities (NHIs) that now outnumber human users 100-to-1 in enterprise environments. They don't follow predictable session patterns. They plan and execute autonomously. They chain tools together in ways nobody anticipated at auth time.
The core problem breaks down into three dimensions:
| Dimension | Human User | AI Agent |
|---|---|---|
| Identity | Known employee, SSO login | Service account, API key, delegated credential |
| Access | Predictable scope, role-based | Dynamic, context-dependent, potentially unbounded |
| Behavior | Reviewable actions at human speed | Autonomous decisions at machine speed |
A human clicks a button, waits for a response, reads it. An agent can chain 50 API calls in 3 seconds, accessing systems its creator never intended. Overly permissive access that's a minor risk for a human becomes a major incident when an agent exploits it at machine speed.
π‘οΈ What Cisco Announced at RSAC 2026
Cisco's approach covers four pillars: identity, hardening, runtime enforcement, and SOC tooling.
Zero Trust Access for AI Agents
The headline feature extends Cisco's Zero Trust framework to treat AI agents as first-class identities. Key components:
- Cisco Duo IAM β agents get enterprise-grade identities (not just shared API keys), each tied to an accountable human employee
- MCP policy enforcement β policies are enforced at the Model Context Protocol layer, governing agent-to-tool communication
- Intent-aware monitoring β Cisco Secure Access evaluates not just what an agent requests, but why
DefenseClaw (Open-Source)
This is the most developer-relevant piece. DefenseClaw is an open-source secure agent framework that bundles:
| Component | What It Does |
|---|---|
| Skills Scanner | Scans every agent skill before execution |
| MCP Scanner | Verifies every MCP server the agent connects to |
| AI BoM | Auto-generates an AI Bill of Materials β inventory of all AI assets |
| CodeGuard | Code scanning and sandboxing for agent-generated code |
DefenseClaw hooks into NVIDIA's OpenShell for runtime-level security. When a threat is detected:
- Risky operations are blocked within 2 seconds
- Sandbox permissions are revoked
- Files are quarantined
- Blocked MCP servers are removed from the network allow-list
AI Defense
Pre-deployment tools for model hardening and self-service red teaming. Test your agents against adversarial attacks before they hit production.
Agentic SOC Tools
Machine-speed detection and response for security operations centers. Because when an agent goes rogue, humans can't respond fast enough manually.
π― The OWASP Agentic Top 10 (2026)
OWASP assembled 100+ experts to define the top security risks for agentic applications. If you're building agents, treat this as your working checklist.
| Rank | Risk | What Happens |
|---|---|---|
| π₯ ASI01 | Agent Goal Hijack | Attackers redirect agent objectives via manipulated instructions or tool outputs |
| π₯ ASI02 | Tool Misuse & Exploitation | Agents misuse legitimate tools due to prompt injection or unsafe delegation |
| π₯ ASI03 | Identity & Privilege Abuse | Exploiting inherited credentials, cached tokens, or agent-to-agent trust chains |
| ASI04 | Supply Chain Vulnerabilities | Malicious or tampered tools, MCP servers, models, or agent personas |
| ASI05 | Unexpected Code Execution | Agents generate or execute attacker-controlled code |
| ASI06 | Memory & Context Poisoning | Persistent corruption of agent memory, RAG stores, or knowledge bases |
The scariest entry: Memory & Context Poisoning (ASI06). Unlike standard prompt injection that ends when the session closes, poisoned memory persists across sessions. The agent "learns" the malicious instruction and recalls it days or weeks later.
Researchers call the combination of prompt injection + tool access + persistent memory the "Lethal Trifecta" β agents that can be permanently compromised and act as insider threats.
π Prompt Injection: Still the #1 Threat
OWASP classified prompt injection as the single highest-severity vulnerability for deployed language models. The numbers in 2026:
- 73% of production AI deployments were hit by prompt injection in 2025
- 80%+ of attacks are indirect injection β embedded in documents, emails, web pages, or database content
- CVE-2025-53773: A hidden prompt injection in pull request descriptions enabled remote code execution with GitHub Copilot β CVSS score 9.6
That last one deserves emphasis. An attacker put malicious instructions in a PR description. When a developer used Copilot to review the PR, the agent executed arbitrary code. The attack surface wasn't the agent itself β it was the data the agent consumed.
This is why Cisco's MCP Scanner and Skills Scanner matter. Verifying the integrity of every tool and data source an agent touches is no longer optional.
ποΈ NIST AI Agent Security Standards
The U.S. National Institute of Standards and Technology launched its AI Agent Standards Initiative in February 2026, built on three pillars:
- Industry-led standards β technical convenings and gap analyses with major vendors
- Interoperability protocols β MCP and A2A identified as baselines, targeting an AI Agent Interoperability Profile by Q4 2026
- Fundamental research β agent authentication, identity infrastructure, and security evaluations
Key Deliverables
| Deliverable | Status |
|---|---|
| RFI on AI Agent Security | Closed March 2026 |
| Agent Identity & Authorization Concept Paper | Published April 2026 |
| SP 800-53 control overlays for agentic systems | In development |
| AI Agent Interoperability Profile | Target Q4 2026 |
The SP 800-53 overlays are the big one for enterprise teams. SP 800-53 is the federal security controls catalog β once agentic overlays ship, every government contractor and regulated enterprise will need to comply.
NIST's framework aligns directly with Cisco's approach: identity-first security, least privilege, continuous behavioral monitoring, and full auditability of agent decisions.
βοΈ Cisco vs Microsoft vs Google: Who's Doing What
It's not just Cisco. Every major cloud vendor announced AI agent security products at RSAC 2026.
| Cisco | Microsoft | Google Cloud | |
|---|---|---|---|
| Product | Zero Trust for AI Agents + DefenseClaw | Zero Trust for AI (ZT4AI) + Agent 365 | Agentic SOC |
| Approach | Agent identity via Duo, MCP policy enforcement, runtime scanning | Full AI lifecycle coverage, Defender/Entra/Purview integration | Gemini-powered threat investigation |
| Open source | DefenseClaw (agent framework) | Agent Governance Toolkit (GitHub) | β |
| Unique strength | MCP-native security, 2-second runtime blocking | OWASP Agentic Top 10 coverage, Fortune 500 reach | AI-for-security (agents defending, not being defended) |
| GA timeline | AprilβJune 2026 | Agent 365 GA May 1, 2026 | Available now |
Microsoft's Agent 365 is a control plane that gives IT, security, and business teams visibility into agent activity. Their open-source Agent Governance Toolkit covers policy enforcement, zero-trust identity, execution sandboxing, and reliability engineering.
Google's angle is different: they're focused on using agentic AI for security operations (AI agents that investigate alerts and gather evidence), rather than securing the agents themselves.
The startup space is also exploding β $392 million in agentic AI security funding was announced in the two weeks surrounding RSAC 2026. Geordie AI won "Most Innovative Startup" at the RSAC Innovation Sandbox.
π¨ What Developers Should Do Now
You don't need to wait for NIST standards or buy Cisco products. Here's what you can do today:
1. Treat Agents as First-Class Identities
Stop using shared API keys and service accounts. Give each agent its own identity with scoped permissions. Tie every agent to an accountable human owner.
2. Apply Least Privilege Ruthlessly
# Bad: Agent has access to everything
agent = Agent(
tools=[database, email, slack, github, payments, admin_panel],
permissions="full_access"
)
# Good: Agent has only what it needs for this task
agent = Agent(
tools=[database.read_only(), slack.post_to_channel("engineering")],
permissions="scoped",
max_actions_per_minute=10,
require_approval_for=["delete", "payment", "admin"]
)
3. Scan Your MCP Servers
If you're using MCP, audit every server your agents connect to. Consider using Cisco's open-source DefenseClaw MCP Scanner or building your own verification layer.
4. Maintain an AI Bill of Materials
Know exactly which models, tools, MCP servers, and data sources your agents use. If a vulnerability is discovered in any component, you need to know which agents are affected.
5. Use the OWASP Agentic Top 10 as a Checklist
Review every agent, integration, and data path against the six known failure patterns. Pay special attention to:
- Indirect prompt injection via external data
- Memory poisoning in persistent agent contexts
- Supply chain attacks through third-party MCP servers
π Troubleshooting Common AI Agent Security Issues
| Problem | Solution |
|---|---|
| Agent accessing resources beyond its scope | Implement action-level authorization, not just session-level tokens |
| Prompt injection via external documents | Sandbox all external inputs; use content scanning before agent processing |
| Agent credentials leaking in logs | Use short-lived tokens, rotate frequently, never log full credentials |
| Can't audit what an agent did | Implement structured logging: every tool call, every decision, every data access |
| Third-party MCP server compromised | Maintain an AI BoM, verify server integrity before each connection, use allow-lists |
π What's Next
- π Audit your current agents β use the OWASP Agentic Top 10 as a checklist today
- π οΈ Try DefenseClaw β Cisco's open-source framework is free and covers MCP scanning, skill verification, and runtime sandboxing
- π Watch NIST SP 800-53 overlays β when they ship, they'll become the compliance baseline for regulated industries
- ποΈ Secure your MCP servers β read our guide on building custom MCP servers and add authentication
- π Understand the threat landscape β our AI agent security risks guide covers the fundamentals
Building AI agents? Make sure they're secure β start with our AI coding agent security risks guide and learn how agents connect to tools via Model Context Protocol.