Self-Assembling AI and the Security Gaps It Leaves Behind

Artificial intelligence (AI) agents are starting to do more than generate text. They perform actions – reading from databases, writing to internal systems, triggering webhooks, and updating tickets. Anthropic recently warned that fully autonomous AI “employees” may be only a year away, accelerating the need to rethink security for these new actors.

What’s new is how they’re doing it: not by following hardcoded workflows, but by making decisions at runtime.

This new pattern is showing up everywhere, from internal support bots to automated research assistants to developer productivity tools. In some cases, LLMs write and execute SQL queries. In others, they connect systems that weren’t designed with agentic AI use cases in mind.

What we’re seeing is the rise of self-assembling systems, where an LLM-powered agent interprets a goal and builds its own integration logic on the fly. And while this is incredibly powerful, it comes with serious challenges, especially around security, identity, and access.

Self-Assembly in AI: Code at Runtime

In traditional software, developers design integrations by wiring together APIs, contracts, and credentials. These systems typically rely on a service mesh or API gateway, along with a logic pipeline that has been reviewed, versioned, and tested.

In agentic AI, that wiring happens at runtime. 😱

This pattern, sometimes referred to as “self-assembly,” emerges when an LLM agent dynamically determines which tools or APIs to use to complete a task. There’s no predefined flow, no hardcoded sequence – just a prompt, a plan generated by the model, and a set of actions executed based on its own reasoning.

Here’s an example:

An LLM agent is instructed to “Find any new high-priority support tickets created in the last 48 hours for our top 10 customers and summarize them in an email to the on-call manager.”

To fulfill that task, the agent might:

  • Authenticate to Zendesk and run a filtered query.
  • Look up customer tier in Salesforce.
  • Compose a Markdown report.
  • Email the report via Google Workspace APIs.

This all happens without the developer manually wiring those systems together. Instead, the agent figures it out in real time, using whatever tools and access it has available.
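
To make that concrete, here is a minimal sketch of runtime self-assembly. The tool names, the planner call, and the plan format are hypothetical, not a real SDK; the point is that the model, not the developer, chooses the sequence of calls.

```python
# Illustrative sketch only: tool names, clients, and the planner call are
# placeholders. The model picks the sequence of calls at runtime; the
# developer never wires these systems together.

import json

def search_tickets(priority: str, hours: int) -> list[dict]:
    """Placeholder for a Zendesk query; a real call would need its own credential."""
    return []

def lookup_customer_tier(account_id: str) -> str:
    """Placeholder for a Salesforce lookup; again, its own credential."""
    return "enterprise"

def send_email(to: str, subject: str, body: str) -> None:
    """Placeholder for a Google Workspace send; yet another credential."""
    pass

TOOLS = {
    "search_tickets": search_tickets,
    "lookup_customer_tier": lookup_customer_tier,
    "send_email": send_email,
}

def run_agent(goal: str, planner) -> None:
    # The plan is generated by the model on each run, for example:
    # [{"tool": "search_tickets", "args": {"priority": "high", "hours": 48}}, ...]
    plan = json.loads(planner(goal, tools=list(TOOLS)))
    for step in plan:
        TOOLS[step["tool"]](**step["args"])  # each call uses whatever access the tool holds
```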

And that’s the issue: What access does it have?

Each Integration Is an Access Point

Every action – fetching tickets, accessing CRM data, sending emails – requires an identity and permissions. The agent needs some form of credential to access each system. In a human-driven workflow, that might mean logging in and clicking “Authorize.” In an agent-driven one, that handshake needs to happen automatically.

Here’s what this looks like in practice:

  • Developers passing API keys into environment variables.
  • Secrets hardcoded in YAML files or scripts.
  • Access tokens shared across tools because “We’re just prototyping.”
  • Agents inheriting the identity of the dev environment or CI job that launched them.

These patterns have moved beyond edge cases and are now common in open source projects, internal automations, and commercial tools – many of them fragile or risky.
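
Here is a deliberately oversimplified sketch of those patterns in code; every name and value below is invented.

```python
import os

def connect(tool: str, token: str) -> None:
    """Stand-in for whatever client initialization each tool needs."""
    pass

# Pattern: a long-lived API key handed to the agent through an environment variable.
ZENDESK_API_KEY = os.environ.get("ZENDESK_API_KEY", "")

# Pattern: a secret hardcoded "just for the prototype" in a script or YAML file.
SALESFORCE_TOKEN = "00Dxx0000000000-invented-example"  # never ship this

# Pattern: one broad token reused across every tool the agent can reach,
# so whatever step the model decides to run inherits the same wide access.
for tool in ("tickets", "crm", "email"):
    connect(tool, token=ZENDESK_API_KEY)
```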

Systems Not Meant to Talk

One of the biggest shifts in this new architecture is the coordination among systems that weren’t designed to coordinate. In legacy environments, each system was protected by its own access model. GitHub has OAuth, Snowflake has signed JWTs, and Google Workspace has service accounts and scopes.

These systems were never built to be accessed in sequence by a semi-autonomous agent interpreting a prompt. And certainly not by an agent blending human and machine identity.

What do we mean by “blended identity”? It’s when an agent acts on behalf of a user – say, via OAuth delegation – while also performing system-level actions under its own authority. 

Advisory firm KuppingerCole notes that this is becoming increasingly common, and increasingly problematic, in AI-native systems. Imagine a bot that reads your calendar events using your account, then writes logs to a company-wide database using a service account token. If it writes to the wrong table, who’s accountable: the user, the developer, or the system?
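
A hypothetical sketch of that scenario shows why attribution gets murky. The functions and token sources below are stand-ins, not a real API.

```python
# Hypothetical illustration of blended identity: one agent run, two identities.

def read_calendar(user_oauth_token: str) -> list[dict]:
    """Acts as the user: a calendar API call made with a delegated OAuth token."""
    return []

def write_usage_log(service_account_token: str, rows: list[dict]) -> None:
    """Acts as the workload: a database write made with a service-account token."""
    pass

def summarize_day(user_token: str, service_token: str) -> None:
    events = read_calendar(user_token)       # attributed to the user
    write_usage_log(service_token, events)   # attributed to the service account
    # If the wrong table gets written here, the trail splits across two identities,
    # and neither one maps cleanly to the agent run that actually made the decision.
```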

Security in the Age of Dynamic Plans

Security practices today assume we know what software is going to do. We write rules, policies, and identity mappings based on a fixed understanding of:

  • What services talk to each other.
  • What actions are expected.
  • What credentials should be used.

"Everybody has a plan until they get punched in the face." - Mike Tyson

In a self-assembling agent architecture, that fixed understanding no longer exists. The model’s plan might change on each run. The order of operations might differ. The tools selected could vary based on context or prompt wording.

Yet we’re still granting broad, static access to these agents despite emerging approaches like authenticated delegation, which could allow agents to receive tightly scoped, auditable authority at runtime.

This opens up several technical security concerns:

1) Credential sprawl
Static secrets littered across tools, scripts, and agents are extremely difficult to manage and rotate.

2) Over-permissioning
Agents often receive credentials with broad access “just to make things work,” especially in early development.

3) Lack of scoping
Agents aren’t given context-specific, time-limited access aligned with the task they’re trying to complete.

4) No consistent logging
When an agent uses three different APIs with three different identities, auditing the flow becomes nearly impossible.
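
One way to picture what consistent logging would require: a single record per tool call that ties the agent run, the plan step, and the credential actually used together. The schema below is an assumption for illustration, not an existing standard.

```python
import json
import time
import uuid

def audit_record(run_id: str, step: dict, identity: str, target: str, outcome: str) -> str:
    """Assumed schema: one record per tool call, keyed back to the agent run."""
    return json.dumps({
        "run_id": run_id,      # ties every call back to one agent invocation
        "step": step,          # the plan step the model generated (tool + args)
        "identity": identity,  # which credential was actually presented
        "target": target,      # which system was touched
        "outcome": outcome,
        "ts": time.time(),
    })

run_id = str(uuid.uuid4())
print(audit_record(
    run_id,
    {"tool": "search_tickets", "args": {"priority": "high"}},
    identity="svc-agent-prod",
    target="zendesk",
    outcome="ok",
))
```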

The Trouble with OAuth and Static IAM

Most modern identity models rely on either:

  • User-based OAuth flows, where someone authorizes access manually.
  • Predefined IAM roles, where a workload assumes a role with a set of permissions.

Both models struggle in agentic systems.

OAuth 2.1 with PKCE works well for delegated user access, but what about agents acting on their own behalf? Or hybrid scenarios where an agent partially represents a user?

Static IAM roles also fall short because they require pre-declaring what access a workload will need. But the point of agentic AI is that we often don’t know in advance what the agent will need.
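
For contrast, the machine-only side of this already has a standard shape: an OAuth 2.0 client-credentials request, where the agent acts purely as itself. The endpoint and client values below are placeholders. The hybrid case, where an agent partially represents a user, has no equally standard request, which is exactly the gap.

```python
import requests

# Placeholder endpoint and client values: a standard OAuth 2.0
# client_credentials request, i.e. the agent acting purely as itself.
resp = requests.post(
    "https://auth.example.com/oauth/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "support-agent",
        "client_secret": "invented-static-secret",  # static, with all the problems above
        "scope": "tickets:read crm:read",           # scopes fixed up front, not per task
    },
    timeout=10,
)
access_token = resp.json()["access_token"]
```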

Identity, Reframed

The rise of agentic systems requires us to think differently about security. We’re not just protecting APIs anymore – we’re protecting intent.

A few technical questions we’ve been hearing from teams exploring this space:

  • How do we give agents access without giving them the keys to everything?
  • How do we know what they’re doing at runtime?
  • Can we constrain them to operate only within a specific context or task?
  • What happens when agents start calling third-party tools outside of our infrastructure?

This goes beyond traditional IAM, pushing into new territory for system design, observability, and developer tooling. The Cloud Security Alliance has already called for adaptive access control frameworks that reflect the dynamic nature of agent identity and context.

Rethinking Integration Boundaries

Traditionally, integration happened through API contracts, SDKs, and middleware.

In self-assembling systems, those boundaries are defined by the model’s interpretation of a prompt. That interpretation can be hard to predict, more challenging to audit, and almost impossible to secure retroactively.

What we need now:

  • Context-aware identity models that adapt to the agent’s task.
  • Ephemeral access credentials tied to intent, not just static roles.
  • Traceable logs that link actions back to agent decisions.
  • Guardrails for agent behavior that don’t kill flexibility.
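
None of this exists as a standard yet, but as a directional sketch of the second item (the broker, scopes, and TTL below are assumptions, not a product API): the agent asks for access tied to the task at hand and gets back a credential that expires with it.

```python
import time
from dataclasses import dataclass

# Directional sketch only: a hypothetical broker that issues short-lived,
# task-scoped credentials instead of static secrets.

@dataclass
class ScopedCredential:
    task_id: str
    scopes: tuple[str, ...]  # only what this task needs
    expires_at: float        # tied to the task, not to a deployment

    def allows(self, scope: str) -> bool:
        return scope in self.scopes and time.time() < self.expires_at

def issue_for_task(task_id: str, scopes: tuple[str, ...], ttl_seconds: int = 300) -> ScopedCredential:
    # A real broker would authenticate the workload, log the request, and mint
    # a token from an identity provider; this just models the shape of the idea.
    return ScopedCredential(task_id, scopes, time.time() + ttl_seconds)

cred = issue_for_task("summarize-high-priority-tickets", ("zendesk:read", "gmail:send"))
assert cred.allows("zendesk:read")
assert not cred.allows("salesforce:write")
```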

We’re not there yet (by a long shot). But we’re going to need to get there fast.

Agents Are Building the System. Let’s Make It Safe

Agentic AI is still early, but the architecture is forming fast. As developers, we’re moving from building static systems to designing the conditions in which agents act.

That shift demands a new approach to identity, access, and observability – one focused not on restriction, but on enabling AI systems to operate with safety, control, and accountability across unpredictable paths.

At Aembit, we’re actively exploring these challenges. We’d love to connect if you’re working on agentic AI, tools integration, or secure orchestration.

Visit us here.
