Renee Guttmann has led security at some of the world’s most recognized brands, including Coca-Cola, Royal Caribbean, Time Warner, and Campbell Soup Company. Over a career that spans multiple decades, she’s built and rebuilt cybersecurity programs through every major industry turning point.
What makes Renee stand out is her ability to see patterns across those shifts. She’s known for turning complex technical challenges and risk realities into clear business conversations, and for mentoring the next generation of security leaders to think beyond compliance.
Her approach has always been grounded in one idea: security exists to keep business moving safely, not to slow it down. That principle feels especially relevant in the age of agentic AI.
She doesn’t see this next chapter as a threat to contain, but as a chance to design with more intention – to think ahead, build in safety from the start, and make sure someone’s thinking about what could go wrong before it does.
With that perspective, Guttmann recently joined Aembit as an adviser. We sat down with her to ask how far the industry has come, what lessons still hold true, and how security can guide the next phase of AI adoption.
5 Questions for Renee Guttmann, Adviser to Aembit
Throughout your impressive career, you’ve seen the evolution of cybersecurity, from firewalls to Zero Trust and beyond. How do you see identity – especially for nonhumans like AI agents and software workloads – shaping the next phase of enterprise protection?
Renee Guttmann: Identity and access management (IAM) controls are the foundation of enterprise security. However, the rapid adoption of AI has created a gap where security risks are outpacing most organizations’ ability to respond. Traditional IAM frameworks – built for static systems and human users – are ill-equipped to handle the emerging risks tied to AI and non-human identities. Today, non-human identities outnumber human ones by ratios exceeding 80:1, depending on the organization. The next evolution of enterprise protection focuses on enabling innovation to move quickly without sacrificing control – allowing non-human identities to operate freely, but only within the defined boundaries of who they are and what they’re authorized to do.
Many of your CISO roles involved translating risk into business language. How do you think security leaders should approach explaining “non-human access” risk to executives and boards?
RG: Boards have access to excellent training opportunities through organizations like the NACD, including specialized courses and reference materials on artificial intelligence. While it’s critical to help boards understand AI-related risks, that discussion must also include practical strategies for mitigating those risks.
CISOs need to be ready to address the concept of “non-human access” and its potential business impacts. They should be able to explain what could happen if an AI agent’s credentials are misused – what processes might halt, what data could be exposed or compromised, and how trust in information could be affected.
Most importantly, CISOs must focus on empowering their organizations to implement foundational controls and governance practices that both reduce risk and enable responsible AI adoption.
Traditional IAM frameworks...are ill-equipped to handle the emerging risks tied to AI.
– Renee Guttmann, Veteran CISO and Aembit Adviser
You’ve built and led cybersecurity programs at some of the world’s most recognized brands. What patterns or lessons from those experiences do you see applying to how organizations should manage access for AI agents today?
RG: The most effective cybersecurity programs take a proactive approach to risk – building systems that minimize unnecessary exposure, cost, and remediation time from the start. As AI becomes embedded in critical systems, it’s essential to implement governance and security controls that developers and AI creators can easily adopt. This includes defining robust lifecycle processes to ensure AI workload credentials have the right privileges, are not shared, and are properly retired when no longer needed.
The IAM principles that apply to humans and system accounts also apply to AI. The key difference is that regulations and compliance frameworks have yet to catch up. Best practices for securing AI workloads are emerging now, and organizations must act early to stay ahead of evolving risks. Given the rapid scale and speed of AI adoption, waiting to respond will make mitigation exponentially harder.
As someone who advises startups and large enterprises alike, what qualities do you look for in a security technology company that make you want to get involved?
RG: When I partner with a company, I look for evidence that they’re addressing a real, meaningful problem. Identity management is fundamental to reducing organizational risk, and AI identity risk is an emerging area that demands focused attention. I’m passionate about supporting a company like Aembit in the development and awareness work needed to help organizations adopt secure, effective AI identity solutions, and I’m excited to help build best practices for the successful adoption of AI.
Agentic AI seems poised to change how organizations operate. What excites you most, and what worries you most, about this shift?
RG: Most organizations are exploring how AI can advance their strategic goals. The pace of these initiatives is both remarkable and, at times, a bit unsettling. Many cybersecurity professionals I speak with say their teams struggle to keep up with the rapid rollout of AI projects – and are often left out of the process altogether. What gives me hope is that forward-thinking cybersecurity teams have a real opportunity to shift this dynamic by focusing on solutions that enable innovation rather than slow it down.