When Identifiers Aren’t Identities: Security Lessons from the Base44 AI Vibe Coding Flaw


Wiz Research recently disclosed a vulnerability in Base44, an AI-powered “vibe coding” application development platform now owned by Wix. The issue allowed outsiders to join private applications by presenting an application ID – a value that was plainly visible in URLs and public configuration files. 

Once in possession of that ID, a user could complete the normal registration process, confirm via an email address they controlled, and gain access, even if the application was configured for single sign-on (SSO). The workflow treated possession of the application ID as a sufficient basis for trust.
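
To make the anti-pattern concrete, here is a minimal sketch of an enrollment flow where knowing a public identifier is enough to get in. All names and data are hypothetical; this is not Base44's actual code.

```python
# All names here are hypothetical; this is not Base44's actual code.
PRIVATE_APPS = {"app-12345": {"sso_required": True}}  # toy tenant registry

def send_confirmation_email(email: str) -> None:
    print(f"confirmation sent to {email}")  # an inbox the attacker controls

def register_user(app_id: str, email: str) -> bool:
    app = PRIVATE_APPS.get(app_id)  # app_id was visible in URLs and configs
    if app is None:
        return False
    # Missing: any proof the requester is entitled to join this app
    # (invite token, SSO assertion, domain allowlist, ...).
    send_confirmation_email(email)
    return True  # enrollment proceeds on knowledge of an identifier alone

# Anyone who scraped the app_id can enroll, SSO setting notwithstanding:
assert register_user("app-12345", "attacker@example.com")
```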

This oversight placed every tenant in the shared infrastructure model at risk: a single weak control in the core authentication process became a potential weakness for all customers.

What made this so troubling wasn't technical complexity; it was how easy the attack was to pull off and how broadly it could apply.

It mirrors a common pattern in non-human access: letting a static value stand in for identity. That value might be an application ID, an API key, or a service account name. 

And as more organizations adopt LLM-powered tools and AI-generated applications, they’re increasing their exposure to similar architectural blind spots – where systems built for speed and accessibility skip over basic identity controls.

An identifier may be convenient for routing requests, but it is not a reliable basis for trust.

Human identity systems have moved beyond relying on a single identifier. A username alone is not enough — access typically requires multiple factors, such as passwords, enterprise-issued identity assertions, and behavioral signals. In non-human access, however, similar discipline is often lacking.

Workloads routinely gain access by presenting a token, key, or label that is accepted without verifying the context, origin, or legitimacy of the requester. The Base44 case is a reminder that these same principles must apply to non-human and AI identities.

Here are five practical measures that can help prevent similar exposures.

Five Ways to Prevent Identifier-Based Access Vulnerabilities

1) Treat IDs as addresses, not credentials.

An application ID, tenant ID, or project ID should only determine where a request is routed. It should never advance authentication on its own. If possession of an identifier confers trust, the process needs a separate, verifiable proof of identity.
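
A minimal sketch of that separation, with hypothetical names: the identifier selects a tenant, while a per-tenant signed proof (here, an HMAC over the request body) decides whether the request is trusted.

```python
import hashlib
import hmac

TENANTS = {"app-12345": {"secret": b"per-tenant-signing-key"}}  # toy registry

def route(app_id: str):
    # Routing only: knowing the app_id confers no trust by itself.
    return TENANTS.get(app_id)

def authenticate(app_id: str, body: bytes, signature_hex: str) -> bool:
    tenant = route(app_id)
    if tenant is None:
        return False
    expected = hmac.new(tenant["secret"], body, hashlib.sha256).hexdigest()
    # Separate, verifiable proof of identity, independent of the identifier.
    return hmac.compare_digest(expected, signature_hex)

body = b'{"action": "read"}'
good_sig = hmac.new(b"per-tenant-signing-key", body, hashlib.sha256).hexdigest()
assert authenticate("app-12345", body, good_sig)
assert not authenticate("app-12345", body, "forged")
```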

2) Bind identity to runtime.

A workload’s identity must be tied to where and how it is running. A token reused from a different host, pipeline, or runtime image should fail policy evaluation, preventing copied credentials from being used outside their intended environment.
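
One way to express this, sketched with invented claim names: the credential carries facts about the runtime it was issued to, and policy compares them against what is attested about the caller at request time.

```python
from dataclasses import dataclass

@dataclass
class RuntimeAttestation:
    host: str
    image_digest: str  # e.g., the container image the workload runs in

def policy_allows(claims: dict, attested: RuntimeAttestation) -> bool:
    # A token copied to another host or image fails here even if its
    # signature is still cryptographically valid.
    return (claims.get("host") == attested.host
            and claims.get("image_digest") == attested.image_digest)

claims = {"host": "ci-runner-7", "image_digest": "sha256:aaaa"}
assert policy_allows(claims, RuntimeAttestation("ci-runner-7", "sha256:aaaa"))
assert not policy_allows(claims, RuntimeAttestation("laptop-42", "sha256:aaaa"))
```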

3) Decide at the moment of access.

Make the access decision each time a request is made. Confirm identity, purpose, and environmental conditions in real time. Keep tokens short-lived and audience-specific so their utility is narrow and time-bound.
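
Sketched here with the PyJWT library (key handling is simplified for illustration; a real deployment would use managed, rotated key material): tokens expire in minutes and are bound to a single audience, so a leaked token has narrow, time-bound utility.

```python
import time
import jwt  # PyJWT: pip install pyjwt

KEY = "demo-signing-key"  # illustration only; use managed key material

def issue(subject: str, audience: str, ttl_seconds: int = 300) -> str:
    now = int(time.time())
    claims = {"sub": subject, "aud": audience, "iat": now, "exp": now + ttl_seconds}
    return jwt.encode(claims, KEY, algorithm="HS256")

def verify(token: str, audience: str) -> dict:
    # Raises jwt.ExpiredSignatureError or jwt.InvalidAudienceError on misuse.
    return jwt.decode(token, KEY, algorithms=["HS256"], audience=audience)

token = issue("workload-a", audience="billing-api")
assert verify(token, "billing-api")["sub"] == "workload-a"
# verify(token, "reports-api") would raise jwt.InvalidAudienceError
```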

4) Close the side doors that bypass SSO.

If an application is private, enrollment should be invite-only and anchored to enterprise identity. Self-registration, password reset, and one-time passcode flows should be disabled or brought under the same requirements as SSO.
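
As a sketch (the policy shape is invented), the point is that every enrollment path evaluates the same rule, so no flow quietly exempts itself from SSO:

```python
APP_POLICY = {  # hypothetical policy shape for a private application
    "visibility": "private",
    "enrollment": "invite_only",
    "identity_provider": "enterprise_sso",
}

def may_enroll(flow: str, has_valid_invite: bool, sso_asserted: bool) -> bool:
    # Whatever the flow ("self_register", "password_reset", "otp", ...),
    # private apps demand both an invite and an enterprise SSO assertion.
    if APP_POLICY["visibility"] != "private":
        return True
    return has_valid_invite and sso_asserted

assert not may_enroll("self_register", has_valid_invite=False, sso_asserted=False)
assert not may_enroll("otp", has_valid_invite=True, sso_asserted=False)
assert may_enroll("invite", has_valid_invite=True, sso_asserted=True)
```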

5) Require posture and context checks.

Identity is only part of the equation. Evaluate the workload’s configuration, patch level, and compliance posture before granting access. A valid identity operating from an untrusted or non-compliant environment should not be treated as safe.
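
A compressed sketch of that decision (the posture fields are invented; in practice they would come from an MDM, endpoint agent, or attestation service):

```python
from dataclasses import dataclass

@dataclass
class Posture:
    patched: bool            # patch level meets policy
    compliant_config: bool   # configuration passes compliance checks

def grant_access(identity_verified: bool, posture: Posture) -> bool:
    # A valid identity on an untrusted or non-compliant host is still a denial.
    return identity_verified and posture.patched and posture.compliant_config

assert not grant_access(True, Posture(patched=False, compliant_config=True))
assert grant_access(True, Posture(patched=True, compliant_config=True))
```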

Why This Matters for Non-Human and Agentic AI Access

Across user-based identity systems, these practices are considered basic hygiene. In non-human access, however, they are still inconsistently applied.

Workloads often operate with credentials that live for months, or with identifiers that are accepted without any verification. This creates conditions in which an attacker does not need to breach a vault or defeat authentication – only to reuse something the system already trusts.

As the number of applications built through LLM-based tools and “vibe coding” platforms continues to grow, the volume of non-human identities — and the surface area for misconfigured access — expands with it.

Workload IAM is built to prevent this. Aembit removes the reliance on static trust by verifying workloads in real time, eliminating embedded secrets, and granting access only when identity and posture align with policy. In that model, an application ID informs routing but never substitutes for identity.

To learn more, visit aembit.io.
