This article is for informational purposes only and does not constitute an endorsement of any vendor, product, or security framework. Cisco, Gravitee, CyberArk, Bessemer Venture Partners, and other named commercial entities are sources whose findings, products, and frameworks may serve their own commercial interests in the AI security market; the publisher has received no compensation from any party named herein. Conduct your own due diligence before procurement decisions.

The Workforce That Nobody Trusts Yet

Enterprise AI has entered an awkward adolescence. The technology is powerful enough to automate complex workflows, yet too unpredictable for most organizations to let it operate unsupervised. At the RSA Conference in San Francisco this March, Cisco quantified the discomfort: in a survey of major enterprise customers, 85% reported experimenting with AI agents, but just 5% had moved agentic technology into production. That 80-point chasm between pilot and production is not a technology problem. It is a trust problem — and it is reshaping how the industry thinks about security architecture.

From Chatbots to Actors: Why the Risk Profile Changed

The previous generation of enterprise AI — retrieval-augmented chatbots, summarization tools, copilots — operated within a narrow band: they answered questions or suggested edits. Their blast radius was limited to bad text. Agentic AI is fundamentally different. Agents book meetings, execute code, query databases, transfer funds, and increasingly spin up other agents to delegate subtasks.

Jeff Schultz, Cisco's SVP of Portfolio Strategy, captured the shift in a single sentence: "With chatbots, we worried about what they would say. With agents, we worry about what they do," he told CX Today.

That distinction matters enormously for security teams. A hallucinated answer is embarrassing; an autonomous agent that exfiltrates customer data or approves unauthorized transactions is an existential risk. According to a Gravitee survey of more than 900 executives and technical practitioners, 88% of organizations reported confirmed or suspected AI agent security incidents in the past year, with the healthcare sector reaching 92.7%.

The Three Gaps Holding Enterprises Back

1. The Visibility Gap

Most enterprises cannot answer a basic question: how many AI agents are currently running in our environment? The Gravitee report found that over 50% of all agents operate without security oversight or logging. Only 14.4% of organizations send agents to production with full security or IT approval. Shadow AI is not a hypothetical — it is the default operating mode.

The consequences are tangible. IBM's 2025 Cost of a Data Breach Report, cited by Bessemer Venture Partners, found that shadow AI breaches carry an average cost of $4.63 million — $670,000 more than standard breaches.

2. The Identity Gap

Human employees have badges, permissions, audit trails, and managers. Most AI agents have none of these. According to Gravitee, only 21.9% of teams treat AI agents as independent, identity-bearing entities with their own access scopes. The rest rely on workarounds: 45.6% use shared API keys for agent-to-agent authentication, and 27.2% depend on custom hardcoded authorization logic.

This is the structural root of the trust deficit. When an agent acts, security teams often cannot determine which human authorized it, which tools it accessed, or whether its permissions were appropriate for the task. As CyberArk's analysis noted, "every AI agent is an identity" — but enterprises are not treating them that way.

3. The Ownership Gap

Who is responsible when an AI agent misbehaves? Cisco's research revealed a striking fragmentation: 29% of organizations assign the CISO, 27% the CIO or IT organization, 24% a central AI committee, and 11% admit no one clearly owns agentic AI security. Nearly 60% of security leaders view security concerns as the primary barrier to adoption, yet fewer than a third rank securing agentic AI among their top three priorities for the coming year.

The misalignment is revealing. Security is cited as the obstacle, but not prioritized as the work. That gap between recognizing a problem and resourcing its solution explains much of the 85-to-5 deployment stall.

Cisco's Zero-Trust Blueprint: Treating Agents Like Employees

Cisco's RSA announcements represent the most comprehensive attempt yet to impose organizational discipline on AI agents. The strategy rests on three pillars that mirror how enterprises manage human employees.

Pillar 1: Identity and Accountability

Through Duo IAM and Cisco Identity Intelligence, agents are registered in a centralized directory and mapped to human owners. Every autonomous action generates an audit trail tied to a specific person. The principle is straightforward: no agent acts without a chain of accountability back to a human employee.

Pillar 2: Least-Privilege Access at the Tool Level

Cisco extends its Secure Access SSE platform with MCP (Model Context Protocol) policy enforcement. Rather than granting agents broad permissions, the system issues short-lived, tool-specific tokens — just-in-time, just-enough, and just-long-enough access. Tom Gillis, Cisco's SVP of Infrastructure and Security, illustrated the limitation of traditional rules: "If I write a hard coded rule that says, 'don't buy a Porsche,' the agent will say, 'okay, I'll buy a McLaren,'" he explained to CX Today. Context-aware, time-bound permissions aim to solve what static rules cannot.

Pillar 3: Pre-Deployment Hardening and Runtime Guardrails

AI Defense: Explorer Edition offers dynamic red teaming — multi-turn adversarial testing that probes for prompt injection, jailbreaks, and unintended behavior before agents reach production. The tool integrates directly into CI/CD pipelines via GitHub Actions, GitLab, and Jenkins, treating agent security testing as a build step rather than an afterthought.

At runtime, the Agent Runtime SDK provides guardrails compatible with major frameworks — AWS Bedrock AgentCore, Google Vertex Agent Builder, Azure AI Foundry, and LangChain — ensuring enforcement follows agents regardless of where they are built or deployed.

The SOC at Machine Speed

Cisco is not just securing agents — it is deploying them. Six specialized AI agents built into Splunk address different SOC functions: Detection Builder, SOP Agent, Triage Agent, Malware Threat Reversing Agent, Guided Response Agent, and Automation Builder Agent. The Detection Studio and Malware Threat Reversing Agent are already generally available, with the remainder rolling out through June 2026.

The open-source DefenseClaw framework — integrating a Skills Scanner, MCP Scanner, AI Bill of Materials, and CodeGuard — extends this approach beyond Cisco's ecosystem. A planned integration with NVIDIA's OpenShell signals that the security tooling is designed for the broader agent ecosystem, not just Cisco customers.

The Regulatory Signal: NIST Weighs In

Cisco is not operating in a vacuum. In January 2026, NIST's Center for AI Standards and Innovation published a Federal Register notice seeking public input on securing AI agent systems. The request specifically highlighted adversarial attacks, indirect prompt injection, data poisoning, and specification gaming as priority risks — and recommended least-privilege and zero-trust architecture as mitigations.

By February, NIST had escalated to a formal AI Agent Standards Initiative organized around three pillars: industry-led standards development through ISO/IEC, community-driven open-source protocol work co-invested with the NSF, and fundamental research into agent security and identity infrastructure.

The pace is notable. Moving from an open request for information to a structured standards initiative in six weeks suggests that regulators see the agent security gap as urgent — and that voluntary industry frameworks alone may not close it fast enough.

The Confidence Paradox

Perhaps the most troubling finding across the research is what Gravitee calls the confidence paradox: 82% of executives believe their existing policies protect against unauthorized agent actions, even as 88% of organizations report actual or suspected incidents. That gap between perceived security and experienced reality is dangerous because it suppresses urgency.

A McKinsey red-team exercise, described by Bessemer Venture Partners, underscores the point: an autonomous agent compromised an internal AI platform in under two hours. The threat is not theoretical, and the defenses most organizations believe they have in place are not calibrated for autonomous actors.

Fernando Montenegro, VP and Practice Lead at Futurum, framed the structural issue: "Legacy tools designed for human users create uneven enforcement and blind spots," he noted in coverage of Cisco's announcement. Security architectures built for a world where every action traces to a human keystroke are fundamentally mismatched to a workforce that includes autonomous software.

What Comes Next: The Production Threshold

Cisco's research contains a telling detail: the AI agents that have made it to production are "almost entirely internal-facing" — handling IT operations, security workflows, financial analysis, and R&D. Customer-facing agents remain overwhelmingly in pilot. The pattern suggests that enterprises are willing to accept agent autonomy where the blast radius is contained, but not where a failure is visible to the outside world.

This is a rational response, but it creates a ceiling. The transformative economic value of agentic AI — the kind that justifies the investment — requires agents that interact with customers, partners, and external systems. Getting there demands a security architecture that can extend trust beyond the internal perimeter.

Regional data from Cisco's survey hints at where that threshold may be crossed first. North America leads with 61% of organizations in piloting or production, followed by APJC at 53% and EMEA at 48%. Financial services and technology sectors are furthest ahead — industries with both the resources to invest in security infrastructure and the competitive pressure to deploy agents at scale.

Key Takeaways

  • The deployment gap is a trust gap. Cisco's 85%-to-5% finding reflects security concerns, not technological limitations. Enterprises can build agents; they cannot yet govern them.
  • Identity is the linchpin. Fewer than a quarter of organizations treat agents as identity-bearing entities. Without agent-level identity, access control, and audit trails, production deployment remains too risky for most enterprises.
  • Zero trust must extend to non-human actors. Cisco's blueprint — mapping agents to human owners, enforcing time-bound tool-level permissions, and hardening agents before deployment — offers a concrete framework, but adoption will take time.
  • Regulators are accelerating. NIST moved from an open RFI to a formal standards initiative in six weeks, signaling that the window for voluntary industry self-governance may be narrowing.
  • Internal-first is the bridge. Organizations are building confidence with low-blast-radius internal deployments. The real test comes when agents face customers and external systems.

Disclaimer

This article is for informational and educational purposes only and does not constitute financial, investment, legal, or professional advice. Content is produced independently and supported by advertising revenue. While we strive for accuracy, this article may contain unintentional errors or outdated information. Readers should independently verify all facts and data before making decisions. Company names and trademarks are referenced for analysis purposes under fair use principles. Always consult qualified professionals before making financial or legal decisions.