
How Aembit Is Sparking a Machine Identity Revolution With Workload IAM

Watch our on-demand introductory webinar to learn how Aembit provides secure and efficient identity-based workload access across various environments.

Paul Dahn


Apurva Davé



About This Webinar

Fed up with hardcoded API keys in your applications? Feeling the strain of cloud IAMs limited to just one environment? Frustrated with secrets managers?

Join Aembit experts as we showcase the Aembit Workload IAM Platform, a single console to centrally manage and authorize workload access across your applications, services, and other resources, no matter where they live.

In this immersive product demo, you’ll discover:

  • The Future of IAM: Move beyond DIY, vaults, and cloud-specific solutions to embrace secretless workload IAM.
  • Policy-Based Access Control: Apply conditional access policy control that paves the way for Zero Trust for workloads.
  • No-Code Implementation: Reduce the burden on developers and accelerate deployment times with no-code auth.
  • Cross-Environment Compatibility: Achieve seamless integration across various clouds, SaaS services, legacy apps…even third-party APIs.
  • Plus much more!

Join Aembit Solutions Architect Paul Dahn and our strategy guru, Apurva Davé, to experience the product up close and personal.


Transcript

We’re good to go. So my name is Apurva Dave. I run go-to-market. This is Paul. Paul runs solutions architecture for our customers, and we’re gonna start the way every webinar should start: which is with a joke.

Okay, Paul. Since you’re the only one I can talk to directly: what did one workload ask the other?

I don’t know, what did one workload ask the other? By the way, I’m very sad this is the first and only time we’re gonna do this. But one workload asked the other: can you keep a secret?

That’s bad. Yeah. Thank you. Let’s move on. Alright.

Okay. So let’s talk about, you know, you’re here in general because you wanted to learn about workload identity and access management.

So let’s talk about this access component.

Access today.

Yeah. It could be better. Right? This is the reality of what organizations are seeing in their environment.

When one workload needs to talk to another, a workload could be an application, it could be custom code, some prepackaged app you buy and run, it could be serverless functions, scripts, whatever. Right? They need to access a number of other services to get their work done, whether it’s to access logic or access data, and not all of those are in your control. Right? They’re not all in the same AWS account. They could be spread across multiple clouds, SaaS environments, and so on and so forth.

So in order to make that happen, you tend to see a lot of hardcoded secrets. Secrets sprawl spread throughout this infrastructure. Oftentimes, we hate to say it, reused credentials.

And that’s on the side of workloads trying to create access. There’s a kind of sprawl on the other side of that connection too, which is service account sprawl. In many cases, we see a massive amount of service account sprawl because this environment is not tightly managed. And because the environment is not tightly managed from this access perspective, devs will also have access to machine secrets. After all, if they’re coding up a serverless function or an app, they might be required to get a secret from somewhere else and put it in these machines.

They tend to be long-lived secrets as well that kind of live out there for a long time. The end result is that the nature of applications today, connecting to other apps, APIs, and SaaS services, creates a growing attack surface for your credentials and identities. And most people are dealing with it with what I would describe as a mix of ad hoc solutions.

In some cases, your devs may use a vault. In some cases, they might use cloud IAM.

You may be trying to build open source SPIFFE infrastructure, or you may do it yourself with a custom implementation.

I bet for most organizations, they’re actually doing a mix of these things. And that in and of itself creates a huge problem as people try to create a unified and consistent way of dealing with this access management issue.

Okay. So, this is the problem we wanna solve naturally. That’s why I’m talking about it.

And Aembit’s approach is that we think you should manage access between workloads, not have to manage these low-level secrets and credentials. And what does that mean for us? We’re building a centralized, identity-based view of workload-to-workload access.

So what that means is we want to use identities that connect to workload access policies across all these environments I described, right, not just your shiny new cloud and Kubernetes environments, but literally everything you have to deal with. We also wanna offload the development of auth from your developers. Right? They have to code this into their apps today. Sometimes that’s harder than others. In all cases, it’s work that’s not furthering your business.

Along the way, we think we can help you move to short-lived credentials and eliminate the need for devs to have access to any of these credentials, and deliver you something that gives you visibility into all of this access.

And by doing this, we think we can build the identity control plane for workloads. And that gives you the ability to manage, enforce, and audit access from one workload to another across your distributed and heterogeneous environment.

Okay. So that’s the pain, and that’s kinda what we’re doing. Right? I don’t know how long that was, three minutes? That was the three-minute pitch on kinda what we’re doing. Now I wanna talk a little bit about how we do it before we shift over to Paul to do a demo and show you how it actually works and how our console operates.

So I’ll spend a couple minutes here. And by the way, when we do one-on-one conversations, this is a place where we get a lot of questions from folks. So feel free to pop any questions you have into the Q&A area that’s under the three shapes in the lower right corner.

So the Aembit architecture looks a whole lot like an architecture for a different problem that you’ve probably worked to solve already, which is user identity and access management. Imagine an Okta or a Ping or something like that. You’ve got users on the left-hand side of the picture and apps they’re trying to talk to on the right, with some identity control plane above, or out of band, which controls access rights. And we’re gonna do a similar thing, but with workloads on the left and services or other workloads on the right.

The way we approach this problem is through a process we call two sided federation.

And what that means is we develop trust relationships with your workload operating environments on the left and your services on the right, which allow us to broker access between workloads.

The way we do that is through a technical process and innovation we call attestation.

And so as a workload requests access to a service, we actually use metadata from the environment the workload is in to cryptographically attest to the identity of that workload.

So we’re no longer depending on the workload showing up and saying, yeah, I’m who I say I am, I’ve got a credential, I’m good. Right? We actually don’t know. If you have an access credential, does that mean you should have access? We don’t actually know. By shifting to attestation, we identify the application and then use policies that you build into Aembit Cloud to confirm that that workload has access.

Once we’ve confirmed that, we use our trust relationship with the service itself to issue an access credential on behalf of that service.

Now what’s really interesting here is we can issue the shortest-lived credential that that service understands, thereby enabling you to move to shorter-lived credentials, even if the workload didn’t expect that in the first place.
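As a rough sketch, the broker flow just described, attest the workload from environment metadata, check a policy, then mint the shortest-lived credential the service supports, looks something like this. All names here are illustrative; this is not Aembit's actual API.

```python
import time

# Hypothetical policy table: (client identity, service) -> constraints.
POLICIES = {("billing-app", "snowflake"): {"max_ttl_seconds": 300}}

def attest(workload_metadata: dict) -> str:
    """Derive the workload's identity from environment metadata
    (e.g. a signed instance identity document), rather than from
    a credential the workload presents about itself."""
    return workload_metadata["verified_identity"]

def broker_access(workload_metadata: dict, service: str) -> dict:
    identity = attest(workload_metadata)
    policy = POLICIES.get((identity, service))
    if policy is None:
        raise PermissionError(f"{identity} may not access {service}")
    # Issue the shortest-lived credential the target service understands,
    # even if the workload was written expecting a long-lived secret.
    return {"token": "short-lived-token",
            "expires_at": time.time() + policy["max_ttl_seconds"]}

cred = broker_access({"verified_identity": "billing-app"}, "snowflake")
```

Note that the workload never holds a long-lived secret in this model; access is decided per request from attested identity plus policy.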

Finally, we log all of this in an identity-based manner. So now you’ve got logs and visibility based on identity instead of access between IP addresses alone. Right? All of this happens with Aembit Cloud out of the data path.

So all data between your workload and your services flows between those two. We don’t see that data. We only see access requests. That’s really powerful.

One last component here, which I didn’t mention: you’ll see in this picture we’ve got Aembit Edge. And Aembit Edge is, broadly speaking, a concept. It’s deployed either as a sidecar in your Kubernetes clusters for your applications, as an agent for VMs, or as an SDK that can be integrated directly into your workload. All of that is what enables your workloads to talk to Aembit Cloud. Now in the case of the sidecar or agent, we describe this as a no-code auth methodology.

The agent in Aembit Edge acts as a transparent proxy, which means your applications don’t need to be configured to know about Aembit, and you don’t have to change the way they behave in any way, shape, or form in order to have them work with Aembit. We take care of all of that. All they do is request data the way they normally would in any other scenario.
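The core of the "no-code auth" idea can be sketched in a few lines: the application builds its request as usual, and a proxy in the path adds the credential in transit. The function and header names below are illustrative, not Aembit's implementation.

```python
def fetch_short_lived_token() -> str:
    # In the real flow, this credential would come from the control
    # plane after attestation; a fixed string stands in here.
    return "short-lived-token"

def inject_credential(request_headers: dict) -> dict:
    """What a transparent proxy conceptually does: copy the app's
    request headers untouched and add the auth credential."""
    headers = dict(request_headers)
    headers["Authorization"] = f"Bearer {fetch_short_lived_token()}"
    return headers

# The application built this request with no knowledge of the proxy:
app_request = {"Host": "api.example.com", "Accept": "application/json"}
proxied = inject_credential(app_request)
```

The point is that the app's own request is never modified in place and the app never sees or stores the token; only the outbound copy carries it.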

Obviously, if you choose to go the SDK route, then you do need to code access from your workload into Aembit Cloud, but we give you the option depending on what works best for you. For example, we have a customer who’s deploying Aembit with serverless workloads, and in that case the SDK works better for them. We have another customer who’s using prepackaged applications, and they don’t have any way to change anything going on inside that app. The Aembit Edge sidecar works great for them because it works transparently and the app doesn’t even need to know it exists.

Alright.

Let me drive forward one more. Let me give you an example here, and then we’ll shift over to a demo. This is a customer of ours who was accessing very sensitive data through a set of workloads going to Snowflake. And these workloads were split.

Some of these were custom applications, and others were a prepackaged application. So this was the customer I was referring to on the previous slide who also had prepackaged applications.

Now, most of these workloads were configured to access Snowflake via long-lived secrets. And a couple things: one, that meant that the workloads themselves would oftentimes have either admin credentials or a long-lived API key embedded into the application.

They could access Snowflake, but there was no verification of application identity. So there was no attestation to say the app is who it says it is.

And any work to rotate secrets would be entirely manual. You’d need to rotate them on the Snowflake side, and then you’d need to either go rotate them on the workload side or, even worse, ask your developers to rotate them on the workload side. All of this led to an untenable system in the sense that, as more and more sensitive data was going into Snowflake, they wanted to ensure that only the workloads they had defined were accessing that data, and even accessing it at the right time, and they had no way to do that. So enter Aembit. That picture on the right probably looks pretty familiar to you. We were able to entirely automate the secrets lifecycle. We removed admin access to the credentials, because we were able to inject real-time credentials on behalf of the workloads when they were requested.

What’s interesting here too is because Snowflake supported short-lived credentials, we could inject short-lived credentials into these workload access requests, even for those prepackaged workloads that were only configured to use long-lived secrets. They didn’t need to know that we were injecting short-lived credentials, but we dramatically improved the security of accessing Snowflake on behalf of those workloads. Final thing, which is pretty interesting too: we’re able to support conditional access policies. So it’s not only that you are the right workload and you have policy-based access, but we can apply different conditions. For example: are you being proactively managed by CrowdStrike or Wiz? Is this a time frame that you’re allowed to access data in? All of these kinds of conditional policies dramatically improve both your security and your ability to drive compliance.

We’ve talked through a number of different use cases here, but I’d say the high level is that people use us to protect access between workloads and their most sensitive data and infrastructure. Whether those are data warehouses and data lakes, SaaS services that aren’t fundamentally under your control, or critical infrastructure that reaches out and touches other components of your critical infrastructure, we can apply identity and access management and a Zero Trust model for your workload access.

I’m gonna skip this for now, given time. And let’s just let’s start getting ready for a demo here.

There’s a couple things I want you to see in this demo. First of all, we’re gonna show you in our console how you do policy creation and how you can drive access based on identity instead of secrets. We’ll show you how credentialing works through these trust relationships, trust providers, and credential providers. And finally, we’ll give you a glimpse at conditional access, where you can gate access based on additional conditions. And I think in this case, we’ll talk about your security posture using one of our partners.

Paul, are you warmed up? Are you ready?

I am. I am. Okay. Great. I’m gonna stop sharing. As we switch over to Paul, this is a great time for additional questions.

So if you have any questions, feel free to drop them in the Q&A window and we’ll pick them up either during the demo or right after. Over to you, Paul.

Great. Thank you. Perfect.

I’m just gonna start off with the heart of our platform, which is the access policies.

The access policies are really what determine which client workloads are able to access which server workloads. And so this is a policy engine where you can determine what has access to what. And looking at one of these lines, you can see the five components of our policy.

The first is the client workload. You can think of this like a username and user environment.

This is there so that we can identify that workload, and it’s not meant to be secret.

The next is the trust provider. This is where we get to that attestation that Apurva was talking about. This is us federating with the environment that the workload runs in, so that we can get a cryptographically strong identity for that workload and know it is who we think it is.

On the right side here, we have the server workload, which is just the service that that client workload is trying to access, and then the credential provider, which is how we access that workload. What credential do we use to actually access the workload?

Here in the middle, you might see this little icon that looks like a traffic light. That is a conditional access policy. And so this is showing that this policy is not just "does that client workload have access to that server workload," but also "is it secured in an appropriate manner?"
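To make the five components concrete, here is a minimal data-structure sketch of such a policy and its evaluation. The field names are hypothetical, chosen to mirror the demo's terminology, and are not Aembit's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class AccessPolicy:
    client_workload: str      # who is connecting (the "username")
    trust_provider: str       # how that identity is attested
    server_workload: str      # the service being accessed
    credential_provider: str  # what credential gets injected
    access_conditions: list = field(default_factory=list)  # posture checks

    def evaluate(self, attested_client: str, target: str,
                 conditions_ok: bool = True) -> bool:
        """Grant access only if the attested identity, the target,
        and all conditional checks line up with this policy."""
        return (attested_client == self.client_workload
                and target == self.server_workload
                and conditions_ok)

policy = AccessPolicy("production-app", "aws-metadata",
                      "snowflake", "snowflake-jwt",
                      ["crowdstrike-managed"])
```

A request is denied if any one component fails: wrong identity, wrong target, or a failed condition, which is the "traffic light" in the console.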

So now I’m just gonna dig into each of these a little bit. I’ll start with the client workload. So this is how we identify those workloads.

It can be things like a host name for a VM.

If it’s a VM-based client, or, you know, a hostname with a process name, so that only that single workload on the host is able to access the resources. If it’s not coming from that specific process, we won’t even evaluate it.

Likewise, if we’re looking at, say, a Kubernetes deployment, we can look at the pod name prefix. And that’ll identify all the various pods within that deployment as one group of applications. So you can give them the same rights across each of those pods. And of course, as it scales, those new pods will get those same rights as they come online.

There’s a variety of other things that we can do, things like the Aembit client ID, which is our Edge component. And so then we could say, hey, anything within this Kubernetes cluster we’re identifying as one group, or a source IP address.

There’s a variety of things that we can use in that.

Next, looking at the trust providers, this is where we get to that cryptographic identity. Of course, with the cloud service providers like Azure and AWS, we can use their metadata services and instance identity documents, or tokens signed by them, to identify that client workload, whether it’s a Kubernetes workload, a serverless workload, or a VM running within those environments.

We can also do the same thing with Kubernetes environments.

One example: if I wanted to, I could take the key pair from the Kubernetes environment and federate with that Kubernetes cluster by just uploading the public key into Aembit. That way, we can validate the identity documents, those service account tokens generated by the cluster, and confirm that they actually were created by the cluster.

Alternatively, if you’re using something like an OIDC server, we can just connect to the OIDC server and pull back that public key automatically.
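Conceptually, validating a cluster-issued token means checking its signature against the cluster's key and then checking its claims (issuer, expiry, subject). Real Kubernetes service account tokens are RS256-signed JWTs verified with the cluster's public key, uploaded or fetched via OIDC discovery; the sketch below uses HS256 with a shared key purely so the example is runnable with the standard library.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign(claims: dict, key: bytes) -> str:
    """Build a toy JWT (HS256 stands in for the cluster's RS256)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(key, header + b"." + payload, hashlib.sha256).digest())
    return b".".join([header, payload, sig]).decode()

def validate(token: str, key: bytes, issuer: str) -> dict:
    """Reject the token unless the signature, issuer, and expiry check out."""
    header, payload, sig = token.encode().split(b".")
    expected = b64url(hmac.new(key, header + b"." + payload,
                               hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(
        payload + b"=" * (-len(payload) % 4)))
    if claims["iss"] != issuer or claims["exp"] < time.time():
        raise ValueError("bad claims")
    return claims

key = b"demo-shared-key"
token = sign({"iss": "https://cluster.example",
              "sub": "system:serviceaccount:prod:app",
              "exp": time.time() + 60}, key)
claims = validate(token, key, "https://cluster.example")
```

The issuer URL and subject format above mimic what a cluster emits; the key point is that a token forged or signed by anyone other than the cluster fails the signature check.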

Now that we know the client workload and have cryptographically validated its identity, we look at where it’s connecting to. So that’s the server workload. And the server workloads are actually pretty straightforward. If I look at, say, Snowflake, it’s really just: what is the host it’s connecting to, and what protocol is it using? In this case, because it’s the Snowflake SDK, it’s Snowflake’s proprietary protocol.

And then, you know, what port is it using to connect? In this case, it’s coming into Aembit’s proxy on 443. We are decrypting that traffic, injecting the credential, re-encrypting it, and then sending it out. And then of course, how are we authenticating?

We’re using a Snowflake JWT.

So this is always paired with the credential provider, which is how we create the credential that that service can use.

And if I look here at my Snowflake JWT, I’m just defining the things within Snowflake that will be used for authentication. The first is of course my account ID, and then the second is the username that I’m connecting with. And the Aembit system creates an RSA key pair.

And we show the user the actual Snowflake command that they would need to run to allow us to authenticate using JWTs signed by the private key. One thing that you should note here is that the admin never sees that private key. They only have access to the public key. So there’s no way that admin could ever compromise the private key. That’s one key thing that we’re doing here: making sure that those credentials stay secret even from the admins.
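For reference, Snowflake's key-pair authentication expects a JWT whose issuer claim embeds a SHA-256 fingerprint of the registered public key. The sketch below builds just the claims; the actual token would be RS256-signed with the private key (which, per the demo, only the credential provider holds), and signing is omitted here since it needs an RSA library. The account and user values are hypothetical.

```python
import base64
import hashlib
import time

def snowflake_jwt_claims(account: str, user: str, public_key_der: bytes,
                         lifetime_seconds: int = 3600) -> dict:
    """Claims for a Snowflake key-pair JWT: iss carries the public key
    fingerprint, sub is the qualified user, and exp keeps it short-lived."""
    fingerprint = base64.b64encode(
        hashlib.sha256(public_key_der).digest()).decode()
    qualified_user = f"{account.upper()}.{user.upper()}"
    now = int(time.time())
    return {
        "iss": f"{qualified_user}.SHA256:{fingerprint}",
        "sub": qualified_user,
        "iat": now,
        "exp": now + lifetime_seconds,  # Snowflake caps validity at one hour
    }

# Placeholder bytes stand in for the real DER-encoded public key.
claims = snowflake_jwt_claims("myaccount", "svc_etl", b"not-a-real-der-key")
```

On the Snowflake side, registering the public key is a single statement of the form `ALTER USER svc_etl SET RSA_PUBLIC_KEY='...'`, which is presumably the command the console shows the admin.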

And then finally, we have those access conditions.

This is where, through our integrations, we’ve connected to CrowdStrike and Wiz and then created access conditions based on that. So say I was looking at Wiz for my Kubernetes cluster.

I, of course, have my endpoint, but then I’m looking at things like: hey, is that cluster currently connected? And when was the last time I saw it? Making sure that it’s currently being protected by Wiz.

And so all that comes together in the access policy, where it can be evaluated whenever a client workload wants to access the server: we identify the client workload, identify what it’s connecting to, check that the workload is still secure, and then automatically provide access.

Now the final component to close the loop on this is, of course, reporting on all of that. So we log every time a client workload tries to access a service, and we report it to you in human-readable form, so I can actually look at what that client workload is.

The production application, the service it’s trying to access, in this case Redis, and then I can actually look at the details around that. So I can look into all the information about what was happening within that. If I looked here, I could see, you know, a request to Redis and a response coming back.

If I look within that request, I can see exactly the IP address that it came from, information about the application pool, the event, and then what we did to that connection, as well as some metadata about it. One thing you should note here is we do not collect the data within the stream. We only collect metadata.

So your private corporate information stays within your environment, and we never get to see it.

Of course, along with those workload logs, we also have audit logs that show everything the admins are doing within the system. And of course, all of this doesn’t just stay in our system. We can send it off so you can analyze it within your SIEM.

One quick thing that I also want to mention is our Edge components. We wanna make it very easy for you to deploy, whether it’s a client on a VM or within your Kubernetes cluster, where we have our own Helm chart. You can easily deploy it and then annotate your pods so that you can add our proxy directly to them. So it’s really a very quick deployment you can add to your existing process.

Apurva, I’ll now hand it back to you.

Nice job, Paul, as always.

Paul’s a pleasure to work with. Okay. So I’ll bring the slide back up here. And I know we’re at time. So if you have to run and you do have questions, you should feel free to reach out to us. You can reach me directly at apurva at aembit dot io. If you do have a few moments, we’re happy to take your questions.

I also just wanted to flash up here on how you get started.

So, our product is self-service. It’s really designed to be self-service, and we have a free tier that matches that. So up to ten production-scale workloads can run on our free tier. And we’re happy to actually help you get that up and running as well.

And then we scale pretty efficiently. Right? So we wanna make it easy for you to get this rolling, and you can get started on your own, or we’re happy to help you get rolling here.

We know this is new technology, and we wanna make it as simple and easy for you as possible.

Paul, are you ready for some questions?

I am.

Okay. So the first question from the audience is: do you have to do a custom integration for every service that I’d like to work with, or do you have some kind of generic integration that’ll work for me?

Absolutely. Really, when thinking about integrations, you should be thinking about the type of authentication. And so there are a few standards-based authentication methods that we support that are used widely across the services you’d be using. And so we don’t need to do a custom integration for each individual service.

It’s these broad things, such as OAuth, that we support.

Right. And I’ll just reinforce: if you’re ever having issues getting something set up, we’re really happy to help you do that. Or if there’s, like, an advanced authentication method you’d like supported, that’s something we could work with you on. But generally, we’re designed to broadly scale to all of those integrations you’re going to need.

Next question is: I can see how this works across the clouds and Kubernetes, but what about on-premise environments, and how do you do attestation there?

I’ll take that one. So, generally, what we do is we leverage metadata from the environment in order to do attestation. In on-premise environments, there are usually various forms of metadata we can use to do attestation. Those may come from your VMware or, you know, virtual machine platform, and if you’re using Kubernetes on-prem, it will work similarly. There are extended methods we can use as well, which we’re happy to get into detail with you about. It’s definitely designed to work on-premise as well as in the cloud, and that’s very important.

Next question is a little bit around how this relates to SPIFFE.

Paul, would you like to take this question or would you like me to take this question?

Sure, I’d be happy to. We’ve been talking a lot about SPIFFE. You know, it’s kinda like a cool open source project around identities.

There’s a couple things here. So first of all, SPIFFE is really awesome. If you’re using SPIFFE today at scale, first of all, kudos to you; that’s still a significant amount of engineering work. But it’s another form of identity, and we can absolutely use this identity to manage access to whatever other services you need. That’s really valuable. That consistent source of identity is fantastic.

One thing to note, however, is that SPIFFE does result in a stored secret. Right? It ends up in a stored identity secret, and that does have a form of risk. We try to avoid that by using attestation against native forms of identity, like service account tokens and other forms of metadata that inherently exist surrounding your application, not embedded in your application.

The other component to think about is how broadly you need to connect to your environment. For example, if you’re operating solely within a cluster, SPIFFE could be a great solution for you. But if you’re connecting to other clusters, other clouds, SaaS services, or partner APIs, think about your business partners and the services they may run. In those far-flung infrastructure resources, you don’t have control of how identity is implemented. And you’d probably be pretty lucky if those partners had implemented a SPIFFE infrastructure based on SPIRE or something similar that you could actually use, access, and federate with. So in those cases, it’s really hard to apply a consistent IAM model, whereas Aembit is designed to let you do that.

We’re at five minutes past, so I’m gonna call it here. There’s a couple more good questions coming in. My promise to you is that we’ll get back to you individually on those questions. But in the meantime, I just wanted to thank you for joining us. And I hope you either sign up for our product or just reach out to us, and we’d be happy to talk through your infrastructure, your environment, and your questions, and get you started on your journey to workload IAM.

Thanks again for joining us. Thanks to my co host, Paul, for doing an awesome job. And happy holidays. We’ll see you again soon.
