Why Traditional IAM Is No Match for Agentic AI

Eight minutes. That’s how long it took attackers in one AWS breach to escalate from static IAM credentials exposed in a public S3 bucket to full administrative privileges across the environment. The credentials were the kind legacy IAM systems are built around: long-lived, manually rotated and sitting there for the taking.

Meanwhile, roughly two-thirds of companies are already experimenting with AI agents. That gap between adoption speed and security readiness is where breaches happen. Agentic AI demands identity governance now. Most organizations still depend on workforce IAM platforms like Active Directory, Okta and Microsoft Entra ID, systems designed for human access through passwords, SSO and role-based permissions. Those platforms were never built to govern autonomous agents.

The Architectural Mismatch Between Legacy IAM and Autonomous Agents

Workforce IAM was designed for a world where humans log in, do work and log out. Credentials last for weeks or months, permissions are tied to job titles and sessions assume someone is sitting at a keyboard. Cloud-native IAM and newer workload identity capabilities now address parts of the gap, but most organizations still run on these human-centric systems as their identity backbone. That model starts to break down when autonomous AI agents enter the picture.

As MIT Sloan explains, agentic AI “uses large language models to execute multi-step plans, use external tools, and interact with digital environments.” These agents don’t wait for prompts. They perceive, reason, act and adapt continuously, often spawning subagents to handle tasks across multiple systems simultaneously.

Static Credentials and Machine-Speed Lifecycles

Workforce IAM manages human access through passwords, SSO and MFA. Machine credentials followed a different path. Some landed in Active Directory as service accounts with static passwords and broad privileges. Others, like API keys, OAuth tokens and secrets in configuration files, proliferated with no centralized identity governance at all. Either way, those credentials persist until someone remembers to rotate them. Agents spin up, execute and terminate in seconds. As one analysis of ephemeral credentials puts it, short-lived credentials are “dynamically generated, and automatically revoked after a brief period, unlike static secrets that require manual rotation.” That mismatch is where the risk concentrates.
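The lifecycle difference is easy to see in code. The sketch below is a minimal, hypothetical illustration (the function names, TTL value and token format are assumptions, not any vendor’s API): an ephemeral credential carries its own expiry, so revocation happens by construction rather than by someone remembering to rotate a secret.

```python
import secrets
import time

TTL_SECONDS = 300  # short-lived: minutes, not months

def issue_ephemeral_credential(workload_id: str) -> dict:
    """Mint a credential scoped to one workload with a hard expiry."""
    now = time.time()
    return {
        "workload_id": workload_id,
        "token": secrets.token_urlsafe(32),
        "issued_at": now,
        "expires_at": now + TTL_SECONDS,
    }

def is_valid(credential: dict) -> bool:
    """Expired credentials fail closed; no one has to remember to rotate them."""
    return time.time() < credential["expires_at"]

cred = issue_ephemeral_credential("agent-7f3a")
assert is_valid(cred)                 # usable immediately after issuance
cred["expires_at"] = time.time() - 1  # simulate the TTL elapsing
assert not is_valid(cred)             # and dead shortly after, by construction
```

A static API key inverts every property here: no expiry field, no per-workload scoping and validity that persists until a human intervenes.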

No Workload-Level Identity

Active Directory treats identities as persistent entities tied to employees or long-running services. Autonomous agents are transient workloads, and each instance needs its own cryptographically verifiable identity. The Cloud Security Alliance is direct about this: “Given the transient nature of AI agents, traditional identity mechanisms based on persistent credentials are inadequate. Instead, an ephemeral, workload-level identity approach is required.” When agents share service accounts, one compromised credential exposes every agent using that account and audit trails become meaningless.
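To make the contrast concrete, here is a hypothetical sketch of workload-level identity in the SPIFFE style. The trust domain, URI layout and key derivation are illustrative assumptions (real systems issue X.509 or JWT SVIDs via attestation), but the core idea holds: every spawned instance gets its own verifiable identity instead of a shared service account.

```python
import hashlib
import uuid

def mint_instance_identity(trust_domain: str, agent_type: str) -> dict:
    """Give each agent *instance* its own identity, not a shared account."""
    instance_id = uuid.uuid4().hex  # unique per spawned instance
    spiffe_id = f"spiffe://{trust_domain}/agent/{agent_type}/{instance_id}"
    # Stand-in for per-instance key material; real systems issue an SVID
    # (X.509 certificate or JWT) after attesting the workload.
    key = hashlib.sha256(instance_id.encode()).hexdigest()
    return {"spiffe_id": spiffe_id, "key": key}

a = mint_instance_identity("example.org", "invoice-bot")
b = mint_instance_identity("example.org", "invoice-bot")
# Two instances of the same agent type are still distinct identities, so
# compromising one does not expose the other and audit trails stay meaningful.
assert a["spiffe_id"] != b["spiffe_id"]
```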

Ungoverned Agent-to-Agent Delegation

When Agent A delegates authority to Agent B to complete a task on behalf of a user, legacy IAM has no way to represent that chain. An OpenID Foundation paper flags this directly: “Recursive delegation (agents spawning subagents) complicates authorization scope attenuation and increases risk.” Standard OAuth on-behalf-of patterns can support delegated access, but they were not designed to govern recursive agent-to-agent delegation with clear scope limits across multiple hops.
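Scope attenuation, the property the OpenID Foundation paper says recursive delegation complicates, can be sketched as a simple invariant: each hop may only narrow the scopes it received, never widen them. This is a hypothetical illustration (the token structure and scope names are assumptions), not a standard token format.

```python
def delegate(parent_token: dict, child_agent: str, requested_scopes: set) -> dict:
    """Issue a child token whose scopes are attenuated, never widened.

    Intersecting with the parent's scopes guarantees a subagent can never
    hold a permission its parent lacked, no matter how deep the chain goes.
    """
    granted = requested_scopes & parent_token["scopes"]
    return {
        "subject": child_agent,
        "scopes": granted,
        # Record every hop so the delegation chain stays auditable.
        "chain": parent_token["chain"] + [parent_token["subject"]],
    }

root = {"subject": "agent-a", "scopes": {"crm:read", "crm:write"},
        "chain": ["user:alice"]}
child = delegate(root, "agent-b", {"crm:read"})
grandchild = delegate(child, "agent-c", {"crm:read", "crm:write"})

assert child["scopes"] == {"crm:read"}
assert grandchild["scopes"] == {"crm:read"}  # write was silently dropped
assert grandchild["chain"] == ["user:alice", "agent-a", "agent-b"]
```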

Authorization That Stops at Session Establishment

In many legacy IAM designs, authorization is evaluated primarily at session establishment or token issuance using static role-based access control (RBAC) or attribute-based access control (ABAC) policies. Agents need more continuous evaluation during execution. Forrester argues that “Responsible AI must evolve from periodic, reactive risk assessments to embedded, real-time governance of autonomous decision-making.” A separate MIT study found that major enterprise agentic AI platforms, including IBM’s watsonx and Alibaba’s MobileAgent, “lack documented stop options despite autonomous execution.”
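The shift from issuance-time to execution-time authorization can be sketched as a check that runs on every action with fresh context. This is a hedged illustration under assumed inputs (the anomaly score, approval flag and action names are hypothetical), not a description of any specific policy engine.

```python
def evaluate(action: str, context: dict) -> bool:
    """Context-aware check run at request time, not at session establishment."""
    if context["anomaly_score"] > 0.8:
        return False                        # behavior drifted: deny mid-session
    if action == "export_data" and not context["human_approved"]:
        return False                        # high-risk step needs a human in the loop
    return action in context["allowed_actions"]

ctx = {"allowed_actions": {"read_email", "summarize"},
       "anomaly_score": 0.1, "human_approved": False}
assert evaluate("read_email", ctx)

# Same session, later request: the context changed, so the decision changes too.
# A token-issuance-time check would never see this.
ctx["anomaly_score"] = 0.95
assert not evaluate("read_email", ctx)
```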

The Scale of the Problem

Non-human identity proliferation has outpaced the governance infrastructure meant to manage it. Depending on the measurement methodology, vendors report non-human-to-human identity ratios ranging from 50:1 to 144:1, with that last figure representing a 44% year-over-year increase. Yet according to a Gartner report, only 44% of machine identities are under formal IAM governance.

AI agents compound this problem because they create identities at a different pace and with different access patterns than traditional workloads. A microservice might need access to two or three APIs for its entire lifecycle. An autonomous agent might call a dozen APIs in a single task, spawn subagents that each need their own credentials and then terminate minutes later. The volume of identity events per agent is higher, the credential lifetimes are shorter and the access patterns are less predictable, all of which overwhelm governance systems designed for stable, long-running services.

The 2025 Verizon DBIR found that 88% of attacks on basic web applications involved stolen credentials. IBM X-Force reported an 84% increase in infostealer email delivery. And one industry report found that 50% of organizations reported breaches tied to compromised machine identities, with API keys and TLS certificates as the primary entry points.

When attackers stole OAuth tokens from a chatbot maker in August 2025, they used them to access customer Salesforce and Google Workspace accounts. The tokens looked like legitimate access, bypassing behavioral monitoring entirely. In a separate case, Mandiant documented Russian state-sponsored operators backdooring a service principal with ApplicationImpersonation rights in Microsoft Entra ID to maintain persistent email collection across an entire organization.

In another incident, Chinese state-aligned actors used an AI model as an autonomous agent to conduct cyberattacks, with the AI performing approximately 80% to 90% of all tactical work independently. They bypassed safety guardrails through prompt engineering, not technical exploits.

Every one of these incidents exploited reusable access credentials (bearer tokens, OAuth tokens, session tokens or API keys) that can grant access without meaningful revalidation once stolen. Multifactor authentication does not stop attackers who steal and replay tokens issued after successful MFA. Bearer-token theft remains part of the threat model legacy IAM was never designed to address on its own.

What Is Emerging to Fill the Gap

Five major standards documents emerged in 2025 alone, including NIST IR 8596 (currently an initial preliminary draft), the first U.S. government cybersecurity framework profile for AI systems. IETF working groups published drafts for agent transaction tokens that distinguish between the AI agent performing an action and the human on whose behalf it operates, and for SCIM agent extensions to provision AI agent identities across organizational boundaries.

Cloud providers are building native capabilities as well. AWS Bedrock AgentCore now implements agent identities as workload identities that are environment-agnostic and support multiple authentication credentials simultaneously. Google Cloud’s Workload Identity Federation eliminates service account keys. Microsoft Entra describes just-in-time provisioning specifically for agentic workloads that require specialized persona management.

Vendors are tackling the problem from different angles. Aembit approaches it as a non-human IAM control plane, combining agent and user context into a single auditable credential called a blended identity so that actions can be tied to both the autonomous actor and the human who authorized them. Other vendors have consolidated machine identity security across secrets, certificates, workload identities and SSH keys, while dedicated workload identity platforms argue that each agent needs a unique, verifiable, cryptographically bound identity.

Across these approaches, a consistent pattern is taking shape: short-lived credentials issued just in time, runtime policy enforcement that evaluates context continuously, authentication based on cryptographically verifiable workload identity and audit trails that capture the full delegation chain. In some models, attestation helps prove workload identity while separate policy and authorization systems decide what that workload can access. That pattern shows up whether you look at IETF attestation guidance, cloud provider implementations, SPIFFE/SPIRE-style workload identity models or dedicated NHI platforms that apply policy-driven access decisions in real time per task.
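The full pattern can be sketched end to end: attest the workload, let a separate policy engine decide access from the verified claims, then mint a scoped, short-lived credential. Everything below is a hypothetical sketch under assumed names (the signature check, policy table and workload names stand in for real attestation and authorization systems).

```python
import time

def attest(evidence: dict) -> dict:
    """Stand-in for platform attestation (cloud metadata, TPM, SPIRE agent)."""
    if evidence.get("signature") != "trusted-platform":
        raise PermissionError("attestation failed")
    return {"workload": evidence["workload"], "environment": evidence["environment"]}

def authorize(claims: dict) -> set:
    """Separate policy engine: maps verified claims to allowed scopes."""
    policy = {("invoice-agent", "prod"): {"billing-api:read"}}
    return policy.get((claims["workload"], claims["environment"]), set())

def issue(claims: dict, scopes: set, ttl: int = 300) -> dict:
    """Short-lived, scoped credential bound to the attested identity."""
    return {"sub": claims["workload"], "scopes": scopes, "exp": time.time() + ttl}

claims = attest({"workload": "invoice-agent", "environment": "prod",
                 "signature": "trusted-platform"})
cred = issue(claims, authorize(claims))
assert cred["scopes"] == {"billing-api:read"}
```

The separation matters: attestation proves who the workload is, while the policy layer decides what it may touch, and neither step hands out anything long-lived.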

Where to Start

Waiting for standards to finalize or for the market to consolidate is not a viable strategy when adversaries are already weaponizing AI agents at machine speed.

Start by protecting the workloads and data sources you already know are critical. Every organization has resources that need stronger access controls, from production databases and customer-facing APIs to CI/CD pipelines with deployment privileges and AI tools that teams may have adopted without security review. You don’t need a complete non-human identity inventory to begin hardening those access paths. Secure what you know first, then expand your coverage.

From there, build your visibility over time. A Silverfort analysis found that 94.3% of organizations lack full visibility into their service accounts. The NSA’s 2026 guidance makes discovery of non-person entities an explicit requirement. Full discovery takes time, so run it in parallel. Catalog every service account, API key, certificate and OAuth client across your cloud environments, CI/CD pipelines and third-party integrations while you are already protecting the assets you know matter most.

Then eliminate static credentials wherever possible. Every long-lived API key, hardcoded secret and persistent service account token is an attack surface. Aembit delivers short-term credentials just in time so workloads do not have to manage long-lived secrets directly. NIST IR 8596 recommends “implementing cryptographic authentication methods and continuous validation” for machine-to-machine communications.

Move authorization decisions closer to runtime. Static RBAC policies evaluated once at token issuance cannot prevent an agent with legitimate “read email” permissions from being manipulated through prompt injection into exfiltrating sensitive data. Context-aware, continuous policy evaluation is the direction every major framework is heading. For a deeper look at this shift, Aembit’s analysis of emerging identity imperatives argues that agentic systems need component-level, cryptographically verifiable identity.

Plan for accountability across delegation chains. When agents act on behalf of users and spawn subagents to complete tasks, your audit infrastructure needs to capture who authorized what, which agent acted and under whose authority. The Cloud Security Alliance warns that “autonomous actions by agents lack sufficient auditability,” and that gap becomes a compliance liability as agentic AI scales. At minimum, your logging should record the initiating user, the top-level agent, every subagent in the chain, the resources accessed and the scope of permissions granted at each handoff. Without that chain of custody, incident response teams cannot reconstruct what happened when an agent misbehaves.
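The minimum audit record described above can be sketched as a single structured event. The field names and log destination are illustrative assumptions; the point is that the initiating user, every agent in the chain and the scopes at each handoff land in one queryable record.

```python
import json
import time

def audit_event(user: str, agent_chain: list, resource: str,
                scopes_by_hop: list) -> str:
    """Serialize one delegation-chain event for an append-only audit log."""
    event = {
        "timestamp": time.time(),
        "initiating_user": user,
        "agent_chain": agent_chain,           # top-level agent first
        "resource": resource,
        "scopes_at_each_hop": scopes_by_hop,  # parallel to agent_chain
    }
    return json.dumps(event)  # in practice, ship to tamper-evident storage

record = audit_event(
    user="alice@example.com",
    agent_chain=["planner-agent", "crm-subagent"],
    resource="salesforce:/contacts",
    scopes_by_hop=[["crm:read", "crm:write"], ["crm:read"]],
)
# An incident responder can now reconstruct who authorized what, which agent
# acted and what each hop was permitted to do.
assert "crm-subagent" in record
```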

Whether teams use cloud-native identity, SPIFFE/SPIRE-style workload identity or dedicated non-human IAM platforms, the immediate priorities remain the same: visibility, short-lived access, runtime decisions and stronger attribution for agent actions.

Aembit addresses these priorities through a non-human IAM platform that replaces static credentials with attestation-based, just-in-time access. Agents and workloads authenticate through verified identity and receive scoped, short-lived credentials for each interaction. Conditional access policies evaluate posture and context at the moment of every request, and a full audit trail captures the delegation chain from user to agent to resource. For MCP-based architectures, Aembit’s MCP Identity Gateway intercepts agent requests to MCP servers, performs token exchange using the agent’s verified blended identity and delivers the appropriate scoped credential to the resource without ever exposing it to the agent runtime.
