TL;DR
For the CISO: Your team may be evaluating SPIFFE/SPIRE as a path to zero trust for non-human identities. The standard is sound and vendor-neutral. The implementation, however, is a significant multi-year engineering project with compounding operational costs, and it still does not solve the SaaS access problem, legacy credential types, or AI agent identity. Before you sanction a build, read the “What the CISO Should Know” section near the end of this piece.
For the practitioner/engineer: SPIFFE is a well-designed open standard for cloud-based workload identity. SPIRE, its reference implementation, is real infrastructure, and like all real infrastructure, it demands a dedicated team to deploy, scale, maintain, and secure. Before you commit, map the full ecosystem you will need to run alongside it: agents, datastores, PKI, service mesh, proxy config, authorization policy, monitoring, and a management UI. None of that comes out of the box. Most teams underestimate it by a year and two engineers.
Introduction: The Standard Everyone Wants and the Project Nobody Finishes
There’s a quiet consensus forming in enterprise security: the old way of managing workload credentials is broken. API keys hardcoded in config files. Service accounts with permanent, overly broad access. Secrets rotating manually on whatever cadence the team remembers. Long-lived credentials sitting in vaults, waiting to be stolen. The question is: what’s next?
The answer that keeps coming up is SPIFFE, the Secure Production Identity Framework For Everyone. It’s a CNCF-graduated open standard, born from lessons learned at Google, Uber, Netflix, and Twitter. The idea is elegant: Instead of handing every workload a secret it has to protect, you give it a cryptographic identity tied to what it is and where it runs. Short-lived, automatically rotated, mathematically verifiable. No secret to steal.
SPIFFE is a genuinely good idea. The standard itself is well-designed, vendor-neutral, and increasingly supported across the industry. HashiCorp Vault now speaks SPIFFE. Red Hat is building it into OpenShift. Istio uses it natively. The major cloud providers support federation with SPIFFE identities. If you’re building a greenfield, Kubernetes-native, single-cloud microservices platform and you have a team of experienced platform engineers, SPIFFE/SPIRE could be right for you.
For most organizations, though, what happens in practice is something different: SPIFFE becomes a multi-year project that delivers partial coverage, leaves the hardest credential problems unsolved, and creates an operational burden of its own that can exceed the one it was meant to eliminate.
This isn’t an argument against SPIFFE. It’s an honest look at what it takes to ship it across real enterprise infrastructure: on-premises systems, legacy applications, multi-cloud SaaS integrations, CI/CD pipelines, and the emerging frontier of AI agents. The gap between the standard and the outcome is wider than most teams expect.
What SPIFFE and SPIRE Actually Are
Before getting into the pain, a brief grounding for readers who aren’t deep in the weeds.
SPIFFE (Secure Production Identity Framework For Everyone) is a specification. It defines how workloads should be identified, how those identities should be expressed as SPIFFE verifiable identity documents (SVIDs, typically X.509 certificates or JWTs), and how workloads can obtain and validate those identities. Think of it as a standard for digital passports for software. The passport format is defined. The CA that issues it is trusted by everyone in the domain. The passport expires every hour. No human ever has to touch it.
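That passport has a fixed address format. A SPIFFE ID is a URI of the shape spiffe://&lt;trust-domain&gt;/&lt;path&gt;, where the trust domain names the issuing authority and the path names the workload. As a rough illustration (not the official library, and looser than the spec's actual character rules), a minimal parser might look like:

```python
from urllib.parse import urlparse

def parse_spiffe_id(uri: str) -> tuple:
    """Split a SPIFFE ID into (trust_domain, workload_path).

    Minimal illustration of the spiffe://<trust-domain>/<path> shape;
    real validation per the SPIFFE ID spec is stricter about allowed
    characters, length, and path segments.
    """
    parsed = urlparse(uri)
    if parsed.scheme != "spiffe":
        raise ValueError(f"not a SPIFFE ID: {uri}")
    if not parsed.netloc:
        raise ValueError("SPIFFE ID must name a trust domain")
    return parsed.netloc, parsed.path

# Hypothetical ID: the trust domain anchors the CA; the path names the workload.
td, path = parse_spiffe_id("spiffe://prod.example.com/ns/payments/sa/api")
# td == "prod.example.com", path == "/ns/payments/sa/api"
```

Everything in the trust model hangs off that one string: the CA that signs for the trust domain, and the path that policy refers to.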
SPIRE (SPIFFE Runtime Environment) is the reference implementation of SPIFFE. It’s what you actually run. SPIRE has two core components: the SPIRE Server, which acts as the certificate authority and maintains a registry of which workloads should receive which identities; and SPIRE agents, lightweight daemons that run on every node and perform workload attestation, verifying that a workload is what it claims to be before issuing it an SVID.
The elegance is in the attestation model. Rather than a workload presenting a pre-shared secret, it presents evidence about where and how it runs: an AWS EC2 instance identity document, a Kubernetes service account token, a process-level attribute the kernel can verify. SPIRE compares this evidence against its registry and, if it matches, issues a short-lived certificate. The workload then uses that certificate for mutual TLS with other services. No password. No stored secret. No human in the loop.
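The "expires every hour, no human touches it" property depends on automatic renewal. A common pattern, sketched here from first principles rather than taken from SPIRE's source, is to renew an SVID once it passes the midpoint of its validity window:

```python
from datetime import datetime, timedelta, timezone

def should_rotate(not_before, not_after, now=None):
    """Renew once the SVID is past the midpoint of its validity window.

    Illustrative only: production rotation logic also handles clock
    skew, jitter, and retry backoff.
    """
    now = now or datetime.now(timezone.utc)
    half_life = not_before + (not_after - not_before) / 2
    return now >= half_life

issued = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
expires = issued + timedelta(hours=1)   # a typical one-hour SVID

should_rotate(issued, expires, now=issued + timedelta(minutes=20))  # False
should_rotate(issued, expires, now=issued + timedelta(minutes=40))  # True
```

The point of the half-life threshold is headroom: if renewal fails, the workload still has half its certificate lifetime to retry before connections start breaking.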
That’s the pitch, and it’s a good one. Now let’s look at what it takes to deliver it.
The Ecosystem You’re Actually Signing Up For
One of the most significant gaps between the SPIFFE/SPIRE pitch and the SPIFFE/SPIRE reality is the scope of what you need to build and maintain alongside it. SPIRE is not a product you deploy and forget. It’s a platform that requires an ecosystem of supporting components, each with its own operational profile, version compatibility requirements, and community health.
Here’s what a production-grade SPIFFE/SPIRE deployment actually looks like.
SPIRE Server and Agents: The core. The server needs to run in a high-availability configuration, backed by a shared relational database (PostgreSQL or MySQL). Its CA private keys need to be protected by an HSM or cloud KMS. SPIRE agents need to be installed as DaemonSets on every Kubernetes node, or as system daemons on every VM or bare-metal host. That’s every node. Every environment. Every cloud.
SPIFFE-Helper: For workloads that can’t call the SPIFFE Workload API directly, which includes most legacy applications, many databases, and any third-party software, SPIFFE-helper runs as a sidecar, retrieves SVIDs on the workload’s behalf, and writes them to disk. Managing sidecar processes at scale adds operational overhead and resource consumption. As one engineering team put it, the approach “requires application-level awareness of SPIFFE, which can be intrusive and burdensome, especially for legacy workloads or third-party software.”
SPIFFE CSI Driver: For Kubernetes workloads, the CSI driver mounts the SPIRE agent’s Unix Domain Socket into pods, giving containers access to the Workload API without requiring host mounts (which carry their own security risks). This requires the CSI driver to be deployed and maintained across every cluster.
SPIRE Controller Manager: Because SPIRE only issues identities to workloads that have been explicitly registered, someone has to manage that registration. In Kubernetes, the SPIRE Controller Manager automates this by reconciling workload registrations from CRD definitions. But it needs to stay in sync with your actual workload deployments. If it falls out of sync, workloads fail to get identities and silently break.
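The failure mode here is worth making concrete. SPIRE matches attested selectors against registration entries; if an entry's selectors no longer describe the running workload, no identity is issued. A toy model of that matching (not SPIRE's actual code, with made-up selector values):

```python
def match_entry(registry, attested_selectors):
    """Return the SPIFFE ID of the first entry whose selectors were all
    produced by attestation, or None if nothing matches.

    Toy model: an entry matches only when every one of its selectors
    appears in the attested set.
    """
    attested = set(attested_selectors)
    for spiffe_id, required in registry.items():
        if set(required) <= attested:
            return spiffe_id
    return None

registry = {
    "spiffe://prod.example.com/payments": ["k8s:ns:payments", "k8s:sa:api"],
}

# Pod attests with a matching namespace + service account: gets an SVID.
ok = match_entry(registry, ["k8s:ns:payments", "k8s:sa:api", "k8s:pod-name:api-7d9"])
# Deployment drifted (new service account, stale entry): silently no identity.
drifted = match_entry(registry, ["k8s:ns:payments", "k8s:sa:api-v2"])  # None
```

That second case is the "silently break" scenario: nothing errors loudly at registration time; the workload simply never receives an SVID.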
Envoy and the Secret Discovery Service (SDS): If you want to extend SPIFFE identity to the transport layer via mTLS without modifying every application’s code, you need a proxy, typically Envoy. The SPIRE agent exposes an SDS API that Envoy can use to pull SVIDs and trust bundles directly, enabling mTLS at the network layer. This means configuring Envoy alongside every workload that needs mTLS, coordinating socket paths, managing version compatibility between SPIRE and Envoy, and integrating with whatever service mesh you’re running.
Service Mesh (Istio, Linkerd, Consul): Many teams encounter SPIFFE through a service mesh. Istio ships with a SPIFFE-compliant CA built in. But Istio’s native CA and an external SPIRE deployment don’t mix cleanly. You have to choose one root of trust. If you’ve already deployed Istio and want to use SPIRE as the CA instead, you’re looking at a careful migration, socket path reconfiguration, and a period where mTLS is temporarily disabled while you roll changes out. Indeed Engineering documented exactly this process in a 2024 post: they had to mount new socket paths across clusters one at a time, temporarily dropping mTLS mesh protection during the rollout.
OPA (Open Policy Agent): SPIFFE gives you identity. It does not give you authorization. To answer “should this workload be allowed to call that one?”, you need a policy engine. OPA is the most common choice, often deployed as an Envoy filter. That’s another component to deploy, configure, maintain, and write policy for, in Rego, a domain-specific language with its own learning curve.
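The division of labor is simple to state: the SVID tells you who is calling, and the policy engine decides whether the call is allowed. In production that decision would typically live in Rego behind an Envoy filter; as a language-neutral sketch of the logic, using hypothetical SPIFFE IDs:

```python
# Toy allow-list keyed by (caller, callee) SPIFFE ID pairs -- the kind of
# decision OPA would express in Rego as an Envoy external-authz policy.
POLICY = {
    ("spiffe://prod.example.com/frontend",
     "spiffe://prod.example.com/payments"): True,
}

def authorize(caller_id, callee_id):
    """Deny by default; allow only explicitly listed caller->callee pairs."""
    return POLICY.get((caller_id, callee_id), False)

authorize("spiffe://prod.example.com/frontend",
          "spiffe://prod.example.com/payments")    # allowed
authorize("spiffe://prod.example.com/batch-job",
          "spiffe://prod.example.com/payments")    # denied: not in policy
```

Note what SPIFFE contributes and what it doesn't: the identities in the tuple are cryptographically trustworthy, but the table itself, and the engine that evaluates it, are entirely yours to build and maintain.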
Cert-manager: For managing certificates outside the SPIRE trust domain, such as public-facing TLS or internal PKI for services that can’t speak SPIFFE, cert-manager is the standard Kubernetes operator. It doesn’t integrate directly with SPIRE out of the box, but teams often run both to handle different certificate populations.
Tornjak: The management UI for SPIRE. Because SPIRE’s native tooling is CLI-first and lacks centralized visibility across multiple deployments, Tornjak adds a web UI and management plane. If you’re running SPIRE across multiple clusters or environments, Tornjak is essentially required for operational sanity. It’s a community project under the SPIFFE organization on GitHub, smaller in scope than core SPIRE and with a narrower contributor base; its maintenance cadence, and why that matters for security tooling, is examined in the ecosystem-risk section later in this piece.
HSM or Cloud KMS: SPIRE’s CA private keys must be protected in hardware or a managed key service; running SPIRE without this in production means your CA keys sit on disk. For teams using the AWS KMS KeyManager plugin (the approach Indeed Engineering landed on after hitting issues with EBS/EFS persistent volumes), that’s another cloud dependency to manage and monitor.
Monitoring: SPIRE emits metrics. You need to scrape them. SPIRE Server exposes SVID issuance counts, attestation successes and failures, and bundle freshness. SPIRE Agent exposes rotation metrics and errors. Envoy exposes TLS handshake counts and mTLS success rates. If you don’t monitor all of this, you won’t know when federation is silently failing or when SVID rotation is backing up.
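The alerting logic itself is simple; the work is wiring it to the right series. A sketch of a rotation-health check, where the counter names are hypothetical stand-ins for whatever your SPIRE metrics pipeline actually exposes:

```python
def rotation_health(renewals_ok, renewals_failed, min_success_rate=0.99):
    """Return True when SVID rotation looks healthy over a window.

    Illustrative threshold logic only; the counters here are stand-ins
    for real scraped metrics, and the threshold is a placeholder.
    """
    total = renewals_ok + renewals_failed
    if total == 0:
        return False  # zero rotation activity is itself a red flag
    return renewals_ok / total >= min_success_rate

rotation_health(990, 2)    # healthy: ~99.8% of renewals succeeded
rotation_health(90, 10)    # unhealthy: rotation failures backing up
```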
That’s the stack. And none of it comes pre-integrated. Every component requires configuration, version alignment, network access rules, and ongoing maintenance. Teams that have done it, including Uber, Indeed, and Macquarie Bank, have dedicated platform engineering resources and contributed fixes back to upstream SPIRE in the process. For organizations without that capacity, the stack becomes technical debt rather than technical advantage.
Where SPIFFE/SPIRE Actually Struggles
Even with the full ecosystem in place and a competent team running it, there are categories of problems SPIFFE/SPIRE simply isn’t designed to solve.
The SaaS Problem
SPIFFE’s trust model is cooperative: both the calling workload and the receiving workload need to participate. The workload receiving the connection needs a policy enforcement point, something that can examine the SVID and decide whether to allow or deny access.
For services inside your infrastructure, this is achievable. For third-party SaaS, including Snowflake, Salesforce, GitHub, Okta, your monitoring vendor, and your data pipeline, it’s not. Snowflake doesn’t run a SPIRE agent. Salesforce doesn’t validate SVIDs. These services speak OAuth, API keys, and sometimes Kerberos. SPIFFE has no path to them. You still need secrets for every SaaS integration, and those secrets still need to be stored, rotated, and audited. SPIFFE reduces internal credential sprawl; it doesn’t touch the SaaS boundary.
The Legacy Application Problem
Legacy applications (Java monoliths on VMs, .NET services on bare metal, COBOL systems interfacing through APIs) cannot call the SPIFFE Workload API. They weren’t written to. Adapting them requires either modifying their code (expensive, risky, often contractually impossible with vendor software) or wrapping them with a SPIFFE-aware proxy (adding operational complexity and a new failure point). SPIFFE-helper provides a middle path, but it requires process-management infrastructure alongside every legacy workload.
The brutal truth: in most enterprises, the applications carrying the most sensitive credentials are precisely the ones least equipped to participate in SPIFFE.
The On-Premises Problem
SPIFFE was born in a cloud-native context, and it shows. Node attestation, the process by which a SPIRE agent proves the identity of the node it’s running on, works elegantly in cloud environments because cloud providers supply cryptographically signed instance metadata. An EC2 instance can present an AWS instance identity document. A GKE node can present a GCP attestation token. A bare-metal server in your data center cannot.
For on-premises workloads, SPIRE falls back to less elegant attestation mechanisms: TPM-based attestation (requires TPM hardware and complex configuration), or process-level selectors like user ID, group ID, and executable path. The latter are harder to spoof than shared secrets, but they’re also harder to manage at scale and more brittle when systems change.
The official SPIRE scaling documentation acknowledges this complexity directly: “when deploying a single SPIRE trust domain to span regions, platforms, and cloud provider environments, there is a level of complexity associated with managing a shared datastore across geographically dispersed locations.” That’s the documentation being diplomatic. In practice, teams with significant on-premises footprints often find SPIFFE/SPIRE impractical to extend there, leaving their most sensitive legacy infrastructure outside the identity model entirely.
The CI/CD Pipeline Problem
CI/CD pipelines are ephemeral, high-privilege, and increasingly targeted. Every pipeline run needs access to cloud credentials, container registries, signing keys, and deployment permissions. SPIFFE can theoretically help here, as SPIRE supports Kubernetes-based attestation for CI jobs running in pods. But GitHub Actions runners, GitLab CI agents, and cloud-hosted pipeline systems don’t run SPIRE agents by default, and configuring reliable attestation for ephemeral runners is non-trivial.
The OIDC-based federation that GitHub Actions and GitLab now offer with major cloud providers is often simpler for CI/CD than full SPIFFE deployment, but it lacks portability across providers and generates policy sprawl as teams and repos scale.
The Federation Problem
SPIFFE’s cross-domain federation model is conceptually sound: two SPIRE deployments exchange trust bundles, and workloads can authenticate across domain boundaries. In practice, federation is one of the most operationally intensive parts of SPIRE.
Adding or removing a trust domain requires a configuration change and restart for every participating SPIRE deployment. Trust bundles must be updated across all participating servers when roots rotate. And scaling SPIRE to multiple servers within a single trust domain creates a new problem: the number of JWT signing keys in the JWKS grows with the number of SPIRE servers, and third-party systems have undocumented limits on how many signing keys they’ll accept.
Indeed Engineering discovered this the hard way. AWS STS accepts approximately 100 signing keys; Confluent accepted only a handful. Neither documented the limit. The engineering team had to work with Confluent to raise a soft limit, while the AWS limit remains, and they filed a SPIRE issue that remains open. They also found that SPIRE’s credential composer plugin serializes timestamp fields as floats, causing AWS STS to reject JWTs because of invalid data types in the iat and exp claims. A fix was contributed by Indeed, but the episode illustrates what production SPIFFE/SPIRE work actually looks like: you’re often debugging at the intersection of three different systems’ undocumented behavior.
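The timestamp bug is easy to reproduce in miniature. Some JWT consumers insist on integer seconds in the iat and exp claims, and a serializer that emits floats produces tokens those consumers reject. A sketch of the difference (not SPIRE's plugin code; the subject ID is hypothetical):

```python
import json
import time

def claims(lifetime_s, as_float):
    """Serialize a minimal JWT claims set (illustration only).

    as_float=True mimics a serializer that emits fractional timestamps;
    as_float=False emits the integer seconds strict consumers expect.
    """
    now = time.time()
    if as_float:
        iat, exp = now, now + lifetime_s
    else:
        iat, exp = int(now), int(now) + lifetime_s
    return json.dumps({"sub": "spiffe://prod.example.com/ci",
                       "iat": iat, "exp": exp})

bad = json.loads(claims(3600, as_float=True))    # iat is a float -- rejected
good = json.loads(claims(3600, as_float=False))  # iat is an int -- accepted
```

The bug is invisible in unit tests that only round-trip your own tokens; it only surfaces when a third party's stricter parser sees the payload.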
The “Deployment Takes How Long?” Problem
Defakto (formerly SPIRL), a company founded by the engineer who ran one of the world’s largest SPIFFE deployments, is unusually candid in their documentation: small-scale SPIRE deployments typically take 6 to 12 months, and more complex deployments can easily require 12 to 24 months to complete, figures that assume a core team of experts to build the solution. This isn’t a critique from a competitor; it’s the measured admission of people who built it and went commercial because of it. The fact that they raised $30.75M in Series B funding to wrap SPIFFE/SPIRE in a commercial product is itself the strongest argument against the “just use the open source” framing.
The Ecosystem Risk: Open Source Isn’t the Same as Maintained
There’s a category of risk that doesn’t show up in architecture diagrams: component lifecycle risk. Every piece of the SPIFFE/SPIRE ecosystem is open source, which means every piece depends on community contributions for security patches, compatibility updates, and feature development.
Core SPIRE is a CNCF Graduated project with active maintainers and corporate sponsors. But the broader ecosystem is less uniform.
Tornjak, the management UI that most teams need to operate SPIRE at scale, has a smaller contributor base and slower release cadence than core SPIRE. Its published packages on the SPIFFE GitHub organization show the last major component releases dated to 2022 and 2023, with subsequent versions composed primarily of dependency bumps from Dependabot. This isn’t a critique of the project’s contributors. It’s a structural reality of open source: niche components get less attention than core infrastructure. And in security tooling, a component that isn’t actively maintained may become a security risk if vulnerabilities in its dependencies go unpatched. The CNCF has a formal archival process precisely because this happens. OpenEBS was archived in early 2024 due to lack of activity, only to be revived months later. A security-critical management plane that goes quiet is a different kind of risk.
The pattern extends to integration-layer components. Plugin compatibility between SPIRE versions and Envoy versions changes. The SPIFFE CSI driver has its own release cycle. When you’re composing a platform from eight or nine independently maintained components, keeping them mutually compatible is a job in itself.
Teams that build on this stack inherit the responsibility for monitoring every component’s security posture, testing upgrades, and patching vulnerabilities across the full dependency tree. That’s not a reason to avoid open source. It is a reason to honestly account for the ongoing engineering cost.
The Newest Frontier: AI Agents and the Identity Problem SPIFFE Wasn’t Designed For
Enterprise AI is moving fast, and it’s moving in a direction that creates identity problems SPIFFE/SPIRE fundamentally wasn’t designed to handle.
To understand why, it helps to distinguish three categories of AI agent deployment, each with a different identity risk profile.
Category 1: Commercial AI with Human-in-the-Loop
A security analyst uses Claude.ai or a similar commercial AI assistant to help with threat research. They invoke a tool integration that connects to an internal API. The human is present at every step, reviewing and approving actions before they execute. In this model, the AI is operating on behalf of a human whose identity is known and authenticated.
The identity problem here is subtle but real. When the AI’s tool call reaches your internal API, what identity does it carry? In most current deployments: the employee’s credentials, inherited wholesale. There’s no agent-specific identity, no scoping to the specific task, no audit trail that distinguishes “employee queried this API directly” from “employee’s AI assistant queried this API on their behalf.” The Cloud Security Alliance surveyed 285 security professionals in 2025 and found that only 23% of organizations have a formal, enterprise-wide strategy for agent identity management, and that teams are commonly sharing human credentials with agents because no purpose-built alternative exists at scale.
Category 2: Custom Agents with Human in the Loop
Your engineering team has built a custom LangChain or AutoGen pipeline that automates code review, summarizes security alerts, or drafts infrastructure changes for human approval. A human reviews before each significant action executes. The agent has access to internal APIs, cloud resources, and potentially sensitive data.
This is where the credential problem becomes acute. The agent needs API keys, OAuth tokens, and service account credentials, one per integration. Those credentials must be stored somewhere, rotated somehow, and revoked if the agent is compromised. SPIFFE can theoretically issue the agent a workload identity if the agent runs as a Kubernetes workload with a SPIRE agent on its node. But the downstream services the agent calls, your SaaS tools, your cloud services with their own auth mechanisms, your databases, still require non-SPIFFE credentials. The agent becomes a credential aggregation point, which is exactly the attack surface modern adversaries target.
A striking example: Replit’s coding agent deleted a production database holding data for over 1,200 companies and then generated 4,000 fake accounts to conceal it, because nothing separated the agent’s credentials from production write access. The agent’s identity was indistinguishable from a human developer’s.
Category 3: Fully Autonomous Agents
Event-driven pipelines. Scheduled automation. Self-orchestrating multi-agent systems where Agent A spawns Agent B to complete a subtask. No human at each decision point. These systems operate at machine speed, across multiple systems simultaneously, chaining tool calls that individually look benign but compose into high-privilege workflows.
This is where SPIFFE’s model breaks down most visibly. SPIFFE assigns identity based on where a workload runs and how it was deployed. It doesn’t have a concept of intent, delegation chain, or ephemeral task scope. An autonomous agent that has access to a credential can use it for any purpose its runtime allows. There’s no mechanism within SPIFFE to say “this identity is valid for this specific task during this specific session, and expires when the task completes.”
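One way to see the gap is to compare what an SVID encodes with what an agent task would need. The second record below is hypothetical, not part of SPIFFE or any shipping product; it exists only to make the missing fields concrete:

```python
from dataclasses import dataclass, field

@dataclass
class Svid:
    """What SPIFFE expresses today: identity from where a workload runs."""
    spiffe_id: str      # e.g. spiffe://prod.example.com/agents/researcher
    expires_in_s: int   # time-bounded, but not task-bounded

@dataclass
class TaskScopedGrant:
    """Hypothetical record -- NOT a SPIFFE construct -- showing the
    dimensions an autonomous agent's credential would also need."""
    agent_id: str             # which agent instance
    delegator: str            # the human or agent that initiated the task
    task: str                 # what this credential is for
    allowed_tools: list = field(default_factory=list)  # permitted calls
    session_id: str = ""      # dies with the session, not with a clock
```

Everything in the second record below the first field is what "intent, delegation chain, and ephemeral task scope" would look like as data, and none of it has a home in today's SVID.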
Solo.io’s field CTO wrote in mid-2025 that SPIFFE’s current identity model “may need to be revised to support AI agents,” noting that “if identities are more fine-grained, and even potentially generated on the fly, how can you possibly write authorization policies around this?” The question doesn’t have a clean answer within the SPIFFE framework today.
The Strata Identity and Cloud Security Alliance survey found that 55% of respondents cited sensitive data exposure as a top concern with AI agents, and that most security leaders simply don’t trust their IAM to manage agent identities. Eighteen percent expressed high confidence. Eighty-two percent did not.
The guidance from CISA and industry ISACs is now clear: each AI agent needs a distinct identity (no shared service accounts), along with granular scope assignments and a mechanism to prevent privilege escalation without oversight. SPIFFE can be one input into that model. It cannot be the entire answer.
The Credential Type Gap
Even in environments where SPIFFE/SPIRE is deployed successfully, a significant category of credentials simply cannot be expressed as SVIDs. They exist outside the trust model and must be managed separately:
- OAuth client credentials for SaaS APIs (Salesforce, Slack, GitHub, Okta, Datadog)
- Database passwords for databases that don’t support mTLS or certificate-based auth
- API keys for external data providers and cloud services
- Kerberos tickets for on-premises Active Directory-integrated systems
- SSH keys for legacy systems and CI/CD access
- AWS/Azure/GCP access keys when workload identity federation isn’t available or configured
Every one of these credential types needs its own lifecycle management, rotation policy, storage mechanism, and audit trail, running in parallel with SPIFFE. For many teams, this parallel track represents more operational surface than SPIFFE was ever meant to eliminate.
What the CISO Should Know: Build vs. Buy
This section is written for security and technology leaders evaluating workload identity strategy.
The zero-trust imperative is real. Non-human identities, including service accounts, API keys, workload credentials, and AI agent identities, now outnumber human identities in most enterprises by a ratio of 45 to 1, and they’re growing faster than any team can govern manually. SPIFFE is the right direction. The question is whether your organization should build the implementation or buy it.
What “Building With SPIFFE/SPIRE” Actually Means
You are not buying a product. You are adopting an open standard and then assembling, configuring, securing, and maintaining a platform of 8 to 10 interdependent components. That platform requires at least one experienced platform engineer dedicated to SPIRE operations (and likely two or three at scale); a deployment timeline of 6 to 12 months for simple environments and 12 to 24 months for complex ones; ongoing engineering work to handle version upgrades, plugin compatibility, scaling events, and federation changes; and separate solutions for SaaS credential management, legacy applications, CI/CD pipelines, and AI agent identity, because SPIRE doesn’t solve these.
What It Won’t Cover
Even after a successful SPIRE deployment, you will have credential categories outside the SPIFFE trust model. Any SaaS your workloads call. Any database that doesn’t support certificate-based authentication. Any vendor software that can’t be modified. Your on-premises estate, if it’s significant. Your AI agents, if they operate across tool boundaries or outside Kubernetes.
The Signal in the Market
The engineer who ran one of the world’s largest SPIFFE deployments left that job to build a commercial company on top of SPIFFE because the open source implementation was too hard to operationalize at enterprise scale. That company, Defakto (formerly SPIRL), has raised $49M in venture funding. That’s the market telling you something.
The Questions to Ask Your Team Before You Sanction a SPIRE Build
- Do we have two or more engineers with deep SPIRE expertise, or the budget to hire them?
- Have we mapped all the credential types we need to manage, not just workload-to-workload mTLS, but SaaS, databases, CI/CD, and AI agents?
- What is our on-premises footprint, and does SPIFFE/SPIRE cover it?
- How will we govern AI agent identities as agentic deployment scales?
- What’s the plan if a key ecosystem component (management UI, CSI driver, plugin) goes unmaintained?
- What’s the total cost of ownership over three years, including engineering time, not just licensing?
The Build vs. Buy Framework
Build with SPIRE if you have a predominantly Kubernetes-native, cloud-first environment, a dedicated platform engineering team, and a tolerance for a multi-year deployment horizon.
Consider a commercial solution if you have a mixed environment (cloud, on-prem, SaaS), significant legacy application footprint, AI agents in production or planned, a small security engineering team, or a deadline measured in quarters rather than years.
A Note on Competitive Positioning
The market has validated the problem. HashiCorp has added native SPIFFE support to Vault Enterprise. Red Hat is packaging SPIRE into OpenShift. Teleport offers SPIRE as an alternative for CI/CD workloads. CyberArk and Venafi are investing in machine identity lifecycle management. Defakto raised $30.75M to commercialize SPIFFE.
Every one of these investments is a signal that the open-source SPIRE path is genuinely hard, and that the industry knows it.
The Bottom Line
SPIFFE is a good standard. It describes an identity model for workloads that is technically superior to the shared-secret status quo. The community that built it did so because they lived through the pain of managing workload credentials at massive scale and recognized there had to be a better way.
The gap is between the specification and the implementation. SPIRE is real infrastructure, not a product. Deploying it to production means assembling a platform of interdependent components, each with its own operational overhead, version lifecycle, and maintenance demands. It means solving problems the documentation doesn’t warn you about until you hit them in production, like federation failing because AWS STS has an undocumented signing key limit, or Istio socket paths conflicting with your existing SPIRE configuration. It means accepting that SPIRE covers workload-to-workload mTLS in the environments you control, and that it leaves the SaaS boundary, the legacy application problem, the on-premises gap, and the AI agent identity challenge largely untouched.
For most enterprise teams, the honest answer isn’t “don’t adopt SPIFFE.” It’s “don’t adopt SPIRE as a DIY project and call it done.”
Already Running SPIRE? Let’s Talk.
If you’ve already invested in SPIRE, or you’re mid-deployment and hitting the walls described in this piece, Aembit can meet you where you are. We integrate with existing SPIRE implementations, extend coverage to the credential types and environments SPIRE doesn’t reach, and provide a migration path that doesn’t require you to abandon what you’ve already built.
The goal isn’t to replace your investment. It’s to complete it, covering SaaS access, legacy credential types, AI agent identity, on-premises environments, and CI/CD pipelines with a unified identity and access management layer that works alongside SPIRE today and gives you a path forward as the landscape evolves.
Talk to Aembit about integrating with your SPIRE deployment
Further reading:
- Uber Engineering: Our Journey Adopting SPIFFE/SPIRE at Scale
- Indeed Engineering Blog: Secure Workload Identity with SPIRE and OIDC for Kubernetes and Istio
- Macquarie Bank: Embracing Modern Identity with SPIFFE and HashiCorp Vault
- Defakto: Simplifying SPIFFE, Accessible Workload Identity with SPIRL
- Cloud Security Alliance: Securing Autonomous AI Agents (survey report)
- Cloud Security Alliance: Agentic AI Identity and Access Management
- Aembit: Aembit vs. SPIFFE/SPIRE
- Aembit SPIFFE Credential Provider press release