Proof Key for Code Exchange (PKCE, pronounced “pixy”) is a security extension to the OAuth 2.0 authorization code flow that prevents authorization code interception attacks. Defined in RFC 7636, published in September 2015, PKCE ensures that only the application that initiated an authorization request can exchange the resulting code for an access token.
How It Works
PKCE adds a cryptographic challenge to the standard OAuth 2.0 authorization code flow. The mechanism relies on three components: a code verifier, a code challenge, and a code challenge method.
The flow operates in four steps:
- Generate a code verifier: The client creates a cryptographically random string (43-128 characters) using characters A-Z, a-z, 0-9, and special characters (-, ., _, ~). This verifier should have at least 256 bits of entropy.
- Create a code challenge: The client transforms the verifier into a challenge. Using the recommended S256 method, the client computes the SHA-256 hash of the verifier and Base64URL-encodes the result without padding. The plain method passes the verifier through unchanged and offers minimal security.
- Authorization request: The client sends the code_challenge and code_challenge_method parameters along with the standard authorization request. The authorization server stores this challenge and issues an authorization code.
- Token exchange: When exchanging the authorization code for tokens, the client includes the original code_verifier. The server recomputes the challenge and compares it against the stored value. Tokens are issued only if the values match.
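The four steps above can be sketched end to end. This is a minimal illustration, not a production implementation; the function names are invented for clarity, and a real deployment would use an OAuth library and transmit these values over TLS:

```python
import base64
import hashlib
import secrets

def generate_code_verifier() -> str:
    # Step 1: 32 random octets -> 43-character Base64URL string
    # (256 bits of entropy, within RFC 7636's 43-128 character range)
    return base64.urlsafe_b64encode(secrets.token_bytes(32)).decode().rstrip("=")

def code_challenge_s256(verifier: str) -> str:
    # Step 2: SHA-256 hash of the ASCII verifier, Base64URL-encoded without padding
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).decode().rstrip("=")

def server_verifies(stored_challenge: str, presented_verifier: str) -> bool:
    # Step 4: the authorization server recomputes the challenge from the
    # presented verifier and compares it in constant time before issuing tokens.
    recomputed = code_challenge_s256(presented_verifier)
    return secrets.compare_digest(recomputed, stored_challenge)

# Client side: generate the verifier and derive the challenge.
verifier = generate_code_verifier()
challenge = code_challenge_s256(verifier)

# Server side: only the holder of the original verifier can pass verification.
assert server_verifies(challenge, verifier)
assert not server_verifies(challenge, generate_code_verifier())
```

An attacker who captures only the challenge (step 3) or only the authorization code cannot produce a verifier that passes the final check.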
This mechanism ensures that intercepting the authorization code alone is insufficient for an attacker. With the S256 method, the original code verifier never appears in the front-channel authorization request; it travels only in the back-channel token request over TLS, so an attacker who captures the code cannot complete the token exchange.
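On the wire, the challenge and verifier ride as ordinary request parameters. A sketch of the two requests, using hypothetical endpoint and client values (the verifier/challenge pair is the worked example from RFC 7636's appendix):

```python
from urllib.parse import urlencode

# Example pair from RFC 7636, Appendix B
code_verifier = "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
code_challenge = "E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM"

# Authorization request (front channel): carries the challenge, never the verifier.
auth_url = "https://auth.example.com/authorize?" + urlencode({
    "response_type": "code",
    "client_id": "example-client",          # hypothetical client ID
    "redirect_uri": "https://app.example.com/callback",
    "code_challenge": code_challenge,
    "code_challenge_method": "S256",
})

# Token request body (back channel, over TLS): reveals the verifier for verification.
token_body = urlencode({
    "grant_type": "authorization_code",
    "code": "SplxlOBeZQQYbYS6WxSbIA",       # placeholder authorization code
    "redirect_uri": "https://app.example.com/callback",
    "client_id": "example-client",
    "code_verifier": code_verifier,
})
```

Note the asymmetry: the verifier appears only in the token request, after the authorization server already holds the challenge.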
Why This Matters for Modern Enterprises
OAuth 2.0’s authorization code flow was originally designed for web applications that could securely store client secrets. Public clients like mobile apps, single-page applications, and desktop software cannot safely store secrets because their code is accessible to end users or runs in untrusted environments.
Without PKCE, these public clients face a critical vulnerability: if an attacker intercepts the authorization code during the redirect (through custom URL scheme hijacking, malicious browser extensions, or operating system logging), they can exchange it for access tokens. PKCE closes this gap by binding the authorization request to the token request through a secret known only to the legitimate client.
The security landscape has evolved since PKCE’s introduction. OAuth 2.1, currently being finalized as an IETF working group draft, makes PKCE mandatory for all OAuth clients, including confidential clients that can store secrets. This reflects the industry consensus that PKCE provides defense-in-depth even when other security measures are in place.
Common Challenges With PKCE
- PKCE protects exchange, not identity: PKCE ensures the integrity of the token exchange but does not authenticate the client itself. Any application that can initiate a PKCE flow and capture the redirect can complete the exchange. For autonomous systems like AI agents or automated workloads, this creates a gap: PKCE prevents interception but cannot verify that the requesting entity is authorized to act.
- Nonhuman identity verification: When workloads or agentic AI systems use PKCE-protected flows, strong client authentication must come from infrastructure-asserted identity (cloud metadata, Kubernetes tokens, runtime attestation) rather than PKCE alone.
- S256 vs. plain method: The plain method offers minimal security improvement because the challenge equals the verifier. Always use S256 unless the client genuinely cannot perform SHA-256 hashing, which is rare in modern environments.
- Verifier entropy: Weak or predictable code verifiers undermine PKCE’s security model. Verifiers must be generated using cryptographically secure random number generators with sufficient entropy.
- Secure storage: The code verifier must be stored securely between the authorization request and the token exchange, then cleared immediately after use. Session storage vulnerabilities can expose verifiers to attackers.
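The entropy and format requirements above are easy to enforce mechanically. A small validation sketch, assuming RFC 7636's grammar (43-128 characters from the unreserved set); `is_valid_verifier` is an illustrative helper, not a library function:

```python
import re
import secrets

# RFC 7636 grammar: 43-128 characters drawn from A-Z, a-z, 0-9, "-", ".", "_", "~"
VERIFIER_RE = re.compile(r"^[A-Za-z0-9\-._~]{43,128}$")

def is_valid_verifier(verifier: str) -> bool:
    return VERIFIER_RE.fullmatch(verifier) is not None

# A cryptographically secure verifier (256 bits of entropy) passes...
good = secrets.token_urlsafe(32)
assert is_valid_verifier(good)

# ...while short or predictable strings fail on length alone.
assert not is_valid_verifier("password123")
```

Length checks catch only the grossest failures; the entropy requirement is satisfied by the generator (`secrets`, `SecureRandom`, `crypto.getRandomValues`), not by validation after the fact.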
How Aembit Helps
PKCE strengthens OAuth flows but leaves client identity verification to the implementer. For workloads and autonomous systems, this gap matters. Aembit addresses the identity layer that PKCE cannot, providing workload identity and access management that complements OAuth security extensions.
With Aembit, organizations can:
- Verify workload identity cryptographically through trusted runtime attestation before any OAuth flow begins, ensuring only authorized workloads can initiate access requests.
- Inject credentials dynamically so that workloads never handle static secrets or manage OAuth token lifecycles directly. Aembit Edge handles credential retrieval and injection transparently.
- Apply conditional access policies that evaluate workload posture, environment, and context before granting access, implementing zero-trust principles that go beyond what OAuth alone provides.
- Support OAuth-based credential providers with configurable PKCE settings, enabling secure access to services that require OAuth 2.0 Authorization Code flows.
For agentic AI and autonomous workloads operating without human oversight, PKCE and infrastructure-asserted identity together create a comprehensive security model. PKCE protects the token exchange while Aembit ensures the requesting entity has a verified, authorized identity.
FAQ
What is the difference between PKCE and a client secret?
A client secret is a static credential shared between the application and authorization server, used to authenticate the client during token exchanges. PKCE takes a different approach: instead of relying on a pre-shared secret, it generates a unique, cryptographically random code verifier for each authorization request. The client proves it is the same party that initiated the flow by demonstrating knowledge of this verifier when exchanging the authorization code for tokens.
The critical distinction is that client secrets must be stored securely, which public clients (mobile apps, SPAs, desktop applications) cannot guarantee. PKCE eliminates this requirement by using dynamically generated values that exist only for the duration of a single authorization flow. This makes PKCE suitable for environments where storing long-lived secrets would create security risks. OAuth 2.1 now requires PKCE even for confidential clients that can store secrets, treating it as defense-in-depth rather than a replacement for client authentication.
Why should I use S256 instead of the plain code challenge method?
The S256 method hashes the code verifier using SHA-256 before sending it as the code challenge, while the plain method sends the verifier unchanged. This difference matters significantly for security. With S256, even if an attacker intercepts the code challenge during the authorization request, they cannot reverse the hash to obtain the original verifier needed for the token exchange. SHA-256 is a one-way function, making the intercepted challenge useless without the verifier.
The plain method offers protection only against attackers who can observe the authorization response (where the code is returned) but not the initial request. If an attacker can see both the request and response, they capture both the challenge and the code, gaining everything needed to complete the token exchange. RFC 7636 explicitly warns that plain should only be used when the client genuinely cannot perform SHA-256 hashing, a scenario that is rare in modern development environments. OAuth 2.1 takes this further by recommending that authorization servers reject the plain method entirely.
Does PKCE authenticate the client or just protect the authorization code?
PKCE protects the authorization code exchange but does not authenticate the client. This distinction is important. PKCE ensures that whoever started the authorization flow is the same entity completing it, preventing interception attacks where a malicious application hijacks the authorization code mid-flow. However, PKCE cannot verify that the entity initiating the flow is authorized to do so in the first place.
This limitation becomes particularly relevant for nonhuman identities and autonomous workloads. An AI agent or automated service using PKCE can protect its token exchanges, but nothing in the PKCE mechanism verifies the agent’s identity or authorization to act. For workload-to-workload communication, organizations need additional layers: infrastructure-asserted identity (through cloud metadata services, Kubernetes service accounts, or runtime attestation), conditional access policies, and centralized visibility into which workloads are accessing which resources. PKCE is one component of a secure authorization architecture, not a complete solution on its own.
Is PKCE required for machine-to-machine authentication?
PKCE was originally designed for the authorization code flow, which involves redirecting users through a browser to grant consent. Traditional machine-to-machine (M2M) authentication typically uses the client credentials flow, where a workload authenticates directly with client ID and secret to obtain tokens without user involvement. In this flow, PKCE doesn’t apply because there’s no authorization code to protect.
However, the line between user-facing and machine-to-machine flows is blurring. Agentic AI systems sometimes operate with delegated user authority (requiring authorization code flows with PKCE) and sometimes act on their own behalf (using client credentials or workload identity federation). The MCP Authorization Specification mandates OAuth 2.1 with PKCE for agents accessing protected resources, recognizing that autonomous systems may need to handle both patterns. For pure service-to-service communication without delegated user context, workload identity federation with short-lived tokens often provides stronger security than either client credentials or PKCE alone.