Enterprise security teams have long operated with a basic expectation: When something happens in a system, you can trace it back to a known identity. That no longer holds.
AI agents are now part of that equation, operating inside production environments while relying on identity and access models built for users and service accounts. The result is a growing gap between how access is granted and how it is actually used.
In practice, agents are rarely assigned a distinct identity. They operate under shared service accounts, workload identities, or human credentials. This allows teams to move quickly, but it weakens the link between identity and action. When an action is taken, the system may be able to confirm that it was authorized, but in many cases it is difficult to determine whether it was initiated by a person or an agent without additional context.
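The attribution problem above can be made concrete with a minimal sketch. All names here are hypothetical: it models an audit log where a human and an agent both act through the same shared service account, so the recorded identity is identical and only an optional, explicitly supplied initiator field distinguishes who actually drove the action.

```python
from datetime import datetime, timezone

def log_action(actor_identity, action, resource, initiator=None):
    """Append an audit record. Without an explicit initiator field,
    every action taken under a shared identity looks the same."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "identity": actor_identity,   # what most systems record today
        "action": action,
        "resource": resource,
        # Hypothetical attribution field: populated only if callers supply it.
        "initiator": initiator,       # e.g. "human:alice" or "agent:summarizer"
    }

# A human and an agent act through the same service account:
human_event = log_action("svc-data-pipeline", "read", "s3://reports/q3.csv",
                         initiator="human:alice")
agent_event = log_action("svc-data-pipeline", "read", "s3://reports/q3.csv",
                         initiator="agent:summarizer")

# The authorized identity is identical; only the optional initiator
# field preserves the link between identity and actor.
assert human_event["identity"] == agent_event["identity"]
assert human_event["initiator"] != agent_event["initiator"]
```

Systems that record only the first field can confirm the action was authorized, but not who initiated it, which is the gap the survey describes.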
Today we’re publishing new research with the Cloud Security Alliance on how enterprises are handling identity and access for AI agents. Based on a survey of more than 200 organizations, the findings quantify that gap between granted and actual access once agents are introduced.

Sixty-eight percent of organizations report that they cannot clearly distinguish between actions taken by AI agents and those taken by humans. This is not limited to logging. It affects investigation, accountability, and the ability to explain system behavior when something needs to be reviewed.
Access follows the same pattern. Agents inherit permissions from the identities they use, rather than receiving access that is defined specifically for them. More than half of organizations report that this occurs at least some of the time, and nearly three-quarters indicate that agents are granted more access than required. These permissions are valid within the system, but they were not created with this type of actor in mind.
Control, as a result, tends to be applied after access has already been granted. Teams monitor behavior, introduce manual approval steps, and intervene when something appears out of place. When necessary, they revoke tokens, disable identities, or terminate the environment in which the agent is running. Nearly half of organizations report disabling the identity used by an agent or revoking active session tokens to limit access. These actions stop activity, but they do not change how access is determined.
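The distinction between stopping activity and changing how access is determined can be illustrated with a toy in-memory model (the account name, token, and permission strings are all hypothetical): revoking a session token ends what the agent can do right now, but the over-broad grant it inherited remains defined and will apply to the next session.

```python
# Hypothetical in-memory model of grants and sessions.
grants = {"svc-data-pipeline": {"s3:read", "s3:write", "db:admin"}}  # broad, inherited
active_tokens = {"tok-123": "svc-data-pipeline"}

def revoke(token):
    """Kill the session: the agent can no longer act under this token."""
    active_tokens.pop(token, None)

revoke("tok-123")

assert "tok-123" not in active_tokens             # activity stopped
assert "db:admin" in grants["svc-data-pipeline"]  # over-broad grant still in place
```

This is the after-the-fact pattern the survey describes: intervention operates on sessions and identities, while the permission model that produced the exposure is left unchanged.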
This approach becomes more difficult to maintain as agents take on more responsibility. The number of interactions increases, and the effort required to understand what happened grows with it. The system continues to function, but the relationship between identity, access, and action becomes harder to reason about.
Across the teams we work with, the direction is consistent: separate agent identity from human identity, define access more precisely at the point of request, and improve visibility into agent behavior once access is in place. These are established identity and access concerns, now applied to a different type of actor.
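The first two moves can be sketched together. This is a simplified model with hypothetical names, not any particular product's API: the agent gets its own subject distinct from any human identity, and each credential is minted at the point of request, short-lived, and scoped down to the intersection of what was asked for and what policy allows that agent.

```python
import secrets
from datetime import datetime, timedelta, timezone

def issue_agent_credential(agent_id, requested_scopes, policy, ttl_minutes=15):
    """Mint a short-lived credential for a distinct agent identity,
    scoped at the point of request (hypothetical policy model)."""
    granted = set(requested_scopes) & policy.get(agent_id, set())
    return {
        "subject": f"agent:{agent_id}",   # distinct from any human identity
        "token": secrets.token_hex(16),
        "scopes": sorted(granted),        # only what policy allows this agent
        "expires": (datetime.now(timezone.utc)
                    + timedelta(minutes=ttl_minutes)).isoformat(),
    }

# Agent-specific policy, defined for the agent rather than inherited:
policy = {"summarizer": {"s3:read"}}
cred = issue_agent_credential("summarizer", ["s3:read", "s3:write"], policy)

assert cred["subject"] == "agent:summarizer"
assert cred["scopes"] == ["s3:read"]   # the write request falls outside policy
```

Because the subject is distinct and the scopes are computed per request, both the attribution gap and the excess-access pattern described above are addressed at grant time rather than after the fact.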
The report examines these patterns in detail, based on responses from organizations already working through them.