Rethinking AI Security: Every Interaction Is About Identity

The rise of artificial intelligence (AI) has been nothing short of revolutionary, but with every new frontier comes a unique set of challenges. For many organizations, the promise of AI is tempered by a growing unease about its security. The AI security landscape is vast, complex, and rapidly evolving, often feeling like an overwhelming labyrinth with new threats emerging daily. Terms like "data poisoning," "model evasion," and "prompt injection" dominate the conversation, and the sheer volume of potential vulnerabilities can paralyze even the most seasoned security teams.

This is a familiar feeling for anyone who has witnessed the evolution of enterprise security over the past three decades. We’ve seen similar moments of profound change - from the rise of client-server architectures to the shift to the cloud. Each new era introduced a seemingly insurmountable array of security challenges. Yet, with each transition, a core, unifying principle emerged to bring clarity and control to the chaos. For a long time, that principle was network security. Today, it’s identity.

Identity has become the bedrock of modern security. The shift from a perimeter-based model to one where users, devices, and workloads can access resources from anywhere has made identity the new security perimeter. We no longer secure static networks; we secure the dynamic relationships between identities and the data they access. This fundamental shift has provided a crucial lens through which to simplify and manage the complexity of the digital world.

So, as we face the new challenges of AI, it’s only natural to ask: what if we apply this same powerful lens to the problem of AI security? What if we move beyond the technical jargon and focus on the one constant in every AI system - the identity?

AI Security Today: A Problem Without Boundaries

One of the greatest challenges of AI security is its lack of defined boundaries. AI systems cut across development, infrastructure, users, third-party providers, and autonomous agents. Unlike the shift from on-premises to the cloud, or the adoption of SaaS, AI does not represent a single domain to be secured - it is a fabric woven into every layer of digital architecture.

This lack of boundaries creates uncertainty:

  • Who exactly is responsible for security when AI is co-created across internal builders and external vendors?
  • How can organizations control what data goes into models, given the risks of poisoning, leakage, or unintended exposure?
  • What does trust look like in a world where roles are not just human but also algorithmic?

This problem feels unwieldy because AI introduces identity types that traditional security models have never had to account for. But rather than attempting to secure AI in its entirety as a monolith, we can apply the lens of identity to bring order to the chaos.

The AI Security Problem Is an Identity Problem

At its core, every interaction with an AI system involves an identity. An engineer with a specific role builds a model. A data scientist with a set of permissions trains the model with sensitive data. An end-user with a defined set of access rights uses an application powered by the model. A bot or an agent with its own unique purpose interacts with other systems on the model’s behalf.

Each of these interactions represents a connection between an identity and a resource, and each connection introduces a potential point of vulnerability. Viewing AI security through the lens of identity allows us to simplify the problem from an abstract, technical challenge into a tangible, manageable one. It shifts the focus from securing the "black box" of the AI itself to securing the identities that interact with it.

Instead of getting lost in the weeds of every possible attack vector, we can ask a more fundamental question: who is interacting with this AI, what are they allowed to do, and is that interaction legitimate? This approach breaks down the monolithic problem of AI security into a series of smaller, more defined identity-centric problems.
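The three questions above — who is interacting, what are they allowed to do, and is the interaction legitimate — can be expressed as a single authorization check. The sketch below is a minimal illustration, not a production policy engine; the identity kinds and permission strings (e.g. `"sales-model:query"`) are hypothetical naming conventions chosen for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    """Any principal that can interact with an AI system: human, service, or agent."""
    name: str
    kind: str                              # "human", "service", or "agent"
    permissions: set = field(default_factory=set)

def authorize(identity: Identity, action: str, resource: str) -> bool:
    """Answer the three identity questions for one interaction:
    who is acting (identity), what they may do (permissions),
    and whether this specific request is legitimate (scope match)."""
    required = f"{resource}:{action}"
    return required in identity.permissions

analyst = Identity("analyst@example.com", "human", {"sales-model:query"})
can_query = authorize(analyst, "query", "sales-model")   # permission is present
can_train = authorize(analyst, "train", "sales-model")   # outside the granted scope
```

The point of the sketch is that the check is identical whether `identity.kind` is a human user, a service account, or an autonomous agent — the same identity-centric question covers all of them.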

[Diagram: From AI Risks to Manageable Identity Questions]

Deconstructing the AI Identity Ecosystem

To truly understand how to apply the identity lens, we must first recognize the key identities that exist within the AI ecosystem. This isn't about traditional users alone; it's about a broader class of entities, each with a unique role and set of permissions.

The Builders: These are the data scientists, machine learning engineers, and developers who create and train the AI models. Their access to sensitive training data and the model’s core architecture is critical. Securing this identity is about ensuring they have the right level of access - no more, no less - and that their actions are auditable and verifiable. An identity lens helps us prevent malicious insiders from injecting backdoors or misusing data.

The Users: These are the employees, customers, and partners who interact with AI-powered applications. Whether they are using a chatbot for customer service or an internal tool for data analysis, their identity determines what they can see and do. An identity-centric approach ensures that a user can’t prompt an AI to access data they aren’t authorized to see or perform actions outside their designated role. It transforms the challenge from a pure prompt-injection problem into an identity and access management problem.

The Agents: As AI becomes more autonomous, we are seeing the rise of AI-powered agents and bots that act on behalf of the organization. These agents have their own identities and are granted permissions to perform specific tasks, such as processing transactions or accessing other services. An identity lens is crucial for securing these agents, ensuring they only operate within their defined boundaries and that their identities are protected from impersonation or misuse.

By framing AI security in terms of these three core identity types, we can move beyond the overwhelming complexity of the technology itself. We are no longer trying to secure an abstract algorithm; we are securing the builders, users, and agents who interact with it. This perspective provides a clear, actionable framework for developing a robust security strategy.
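The three identity types can be made concrete as default least-privilege scopes. This is an illustrative sketch only — the scope strings and the exact split of permissions are assumptions for the example, and a real deployment would define its own:

```python
from enum import Enum

class AIIdentityType(Enum):
    BUILDER = "builder"   # data scientists and ML engineers: training data, model internals
    USER = "user"         # employees, customers, partners using AI-powered applications
    AGENT = "agent"       # autonomous bots acting on the organization's behalf

# Hypothetical least-privilege defaults per identity type.
DEFAULT_SCOPES = {
    AIIdentityType.BUILDER: {"training-data:read", "model:train", "model:deploy"},
    AIIdentityType.USER:    {"model:query"},
    AIIdentityType.AGENT:   {"model:query", "downstream-api:invoke"},
}
```

Note the asymmetry: a user can query but never train, and an agent's extra scope (`downstream-api:invoke`) is exactly the surface that needs protection against impersonation.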

Implementing the Identity Approach

So how does an organization actually implement this identity-based approach to AI security? It starts with discovery, just like any identity initiative.

First, map the AI landscape. Who's using AI? What are they using it for? Which departments have official AI projects? Which ones have shadow AI initiatives? This isn't a witch hunt; it's about understanding the current state.

The discovery process often reveals surprises. The sales team might be feeding customer data into public AI services. Developers might be using AI to generate code that goes directly into production. HR might be using AI for resume screening without considering bias implications.
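A discovery exercise like this ultimately produces an inventory. A minimal sketch of what one record might capture, using the hypothetical examples above (field names and system names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class AIUsageRecord:
    """One row in an AI discovery inventory (illustrative fields)."""
    department: str
    system: str            # the AI service or model in use
    purpose: str
    sanctioned: bool       # official project vs. shadow AI

inventory = [
    AIUsageRecord("Sales", "public-llm-chat", "drafting customer outreach", False),
    AIUsageRecord("Engineering", "code-assistant", "generating production code", True),
    AIUsageRecord("HR", "resume-screener", "candidate triage", False),
]

# The first question discovery answers: where is the shadow AI?
shadow_ai = [r for r in inventory if not r.sanctioned]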

Next, classify AI resources by risk. Not all AI is created equal. A general-purpose chatbot helping employees write emails is low risk. An AI system making credit decisions is high risk. A model trained on proprietary data represents different concerns than one using only public information.

This classification drives policy decisions. High-risk AI might require multi-factor authentication, enhanced logging, and human approval for certain actions. Low-risk AI might be more broadly accessible with basic controls.
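That mapping from risk tier to controls can be written down directly. The tiers and control names below are assumptions for the sketch, mirroring the examples in the text (MFA, enhanced logging, human approval):

```python
# Hypothetical control baseline per risk tier.
RISK_CONTROLS = {
    "low":  {"mfa_required": False, "enhanced_logging": False, "human_approval": False},
    "high": {"mfa_required": True,  "enhanced_logging": True,  "human_approval": True},
}

def controls_for(risk_tier: str) -> dict:
    """Look up the control baseline for a classified AI system."""
    return RISK_CONTROLS[risk_tier]

# An email-drafting chatbot vs. a credit-decision model.
chatbot_controls = controls_for("low")
credit_model_controls = controls_for("high")
```

Encoding the policy as data rather than scattered conditionals makes it auditable: the classification decision and its consequences live in one reviewable place.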

Then, establish governance frameworks. Who can approve new AI initiatives? What's the process for granting AI access? How are AI agents created and managed? What happens when an AI system is decommissioned?

These aren't new questions. They're the same governance challenges organizations face with any technology. The identity framework provides a proven model for addressing them.

Real-World Implementation Patterns

Organizations successfully implementing identity-based AI security follow similar patterns. They start small, typically with a pilot program in a single department or use case. They learn what works and what doesn't without betting the entire enterprise.

They leverage existing identity teams rather than creating separate AI security teams. The identity experts already understand the principles; they just need education on AI-specific applications. This approach builds on existing expertise rather than fragmenting security efforts.

They focus on visibility first, control second. Before implementing strict policies, they monitor AI usage to understand patterns and requirements. This prevents overly restrictive policies that drive shadow AI usage.

They iterate rapidly. AI technology evolves quickly, and security approaches must keep pace. What works today might need adjustment tomorrow. The identity framework provides stability while allowing tactical flexibility.

Wrapping up: Simplicity Through Identity

The power of an identity-centric approach is its ability to simplify. By focusing on the "who," "what," and "how" of interactions with AI, we can establish a clear and consistent security posture across the entire AI lifecycle.

This approach allows us to answer critical questions with clarity:

  • Who is allowed to access and modify our AI models and training data?
  • What data can a specific user see or a bot access when interacting with an AI?
  • Are the interactions with our AI systems legitimate, or are they being used by unauthorized identities?

The answers to these questions are not found in analyzing the minutiae of every possible algorithm vulnerability but in the fundamentals of identity and access management. This is the same principle that brought clarity to the challenges of cloud security and the proliferation of SaaS applications.

By applying the lens of identity, we can begin to build a security framework that is both comprehensive and manageable. It is a framework that brings order to the chaos and provides a clear, proven path forward. It's time to shift our perspective on AI security and recognize that the most powerful tool we have is the one we already trust: identity.
