Why We Built Identity Runtime Attribution for AI Agents

Written by Jason Martin | May 14, 2026 12:30:05 PM

Today we shipped something we have been building toward for three years. We are calling it AI agent runtime security, and it is now live across the Permiso platform. I want to talk about why we built it, what it actually does, and where I think the rest of the market is headed in the wrong direction.

Why posture alone will not secure AI agents

Most vendors in this space are solving for posture: where your agents are, how they authenticate, and what permissions they hold. We do all of that too. But posture is a snapshot. It tells you what an agent is configured to do at a point in time, and nothing about what it is doing right now.

Agents are making decisions, calling tools, connecting to MCP servers, accessing data stores, and spawning sub-agents, all in milliseconds. By the time your posture scan finishes, the agent has already done something your scan will never see.

Your identity provider has it even worse. It sees the login, and after that it is blind. Once the agent authenticates, the runs, tool calls, and data access that follow happen in a gap no IdP was designed to cover. We call that gap Post-Authentication Blindness: the distance between the authentication event and everything the agent does after it. The more customers we talked to, the more we realized it is one of the biggest gaps in agent security today.

Identity Runtime Attribution closes that gap. Permiso captures agent runs, events, tool calls, MCP invocations, and data access across the entire agent lifecycle, tying each action back to a specific identity in real time. Not something you piece together during a forensics investigation. Continuous attribution while the agent is running.
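To make the idea concrete, here is a minimal sketch of what continuous attribution means at the data level: every runtime action carries the identity behind it at the moment it happens, rather than being joined to a login event later during forensics. The names (`AttributedEvent`, `AttributionLog`) are hypothetical illustrations, not Permiso's API.

```python
from dataclasses import dataclass, field
import time

@dataclass
class AttributedEvent:
    """One runtime action tied back to the identity that caused it."""
    identity: str   # the human or workload identity behind the agent
    agent_id: str
    action: str     # e.g. "tool_call", "mcp_invoke", "data_access"
    target: str
    ts: float = field(default_factory=time.time)

class AttributionLog:
    """Append-only stream of attributed events, queryable by identity."""
    def __init__(self):
        self.events = []

    def record(self, identity, agent_id, action, target):
        ev = AttributedEvent(identity, agent_id, action, target)
        self.events.append(ev)
        return ev

    def by_identity(self, identity):
        return [e for e in self.events if e.identity == identity]

log = AttributionLog()
log.record("alice@example.com", "agent-7", "tool_call", "github.clone")
log.record("alice@example.com", "agent-7", "mcp_invoke", "mcp://billing")
print(len(log.by_identity("alice@example.com")))  # 2
```

The point of the sketch: attribution is a property of each event as it is emitted, so an investigation is a filter, not a reconstruction.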

You cannot prevent agent security incidents with guardrails alone

We keep hearing vendors say they can prevent agent security incidents with better guardrails, stricter permissions, and deterministic policy enforcement. We have been doing this long enough to know that is not how this is going to play out.

An agent is a non-deterministic system given a goal and the tools to achieve it. One of our engineers watched a coding agent bypass hard permission constraints on a GitHub repository by figuring out a completely different path to clone and merge the code it needed. The agent was not compromised or malicious. It was doing exactly what it was designed to do, just in a way nobody anticipated. And nobody can anticipate every path a reasoning system will take.

You cannot patch that. It is a property of non-deterministic systems, and the vendors selling prevention as the answer are going to discover that the hard way alongside their customers.

The right architecture is runtime visibility and containment. Permiso detects over-privileged access, anomalous tool usage, policy violations, and high-blast-radius behavior in real time, powered by agent-specific behavioral patterns from P0 Labs built alongside our existing research into LLMjacking, cross-prompt injection vulnerabilities, and analysis of 10,000+ AI agent skills using SandyClaw. When an agent crosses a threshold, our kill switches revoke access at the identity layer at machine speed.
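The containment pattern can be sketched in a few lines: a risk score crosses a threshold and access is revoked at the identity layer, idempotently, without a human in the loop. This is an illustrative shape, not Permiso's implementation; the `KillSwitch` name, the 0.8 threshold, and the `revoke_fn` hook are all hypothetical.

```python
class KillSwitch:
    """Trips once per agent when its risk score crosses a threshold."""
    def __init__(self, revoke_fn, threshold=0.8):
        self.revoke_fn = revoke_fn      # callback that revokes identity-layer access
        self.threshold = threshold
        self.tripped = set()            # agents already revoked (idempotence)

    def check(self, agent_id, risk_score):
        if risk_score >= self.threshold and agent_id not in self.tripped:
            self.tripped.add(agent_id)
            self.revoke_fn(agent_id)
            return True
        return False

revoked = []
ks = KillSwitch(revoke_fn=revoked.append)
ks.check("agent-7", 0.4)    # under threshold: nothing happens
ks.check("agent-7", 0.93)   # crosses threshold: access revoked
print(revoked)  # ['agent-7']
```

The design choice worth noting is idempotence: a non-deterministic agent may trip the same detection repeatedly, and revocation should fire exactly once.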

Why agents need different security than non-human identities

Every NHI vendor in the market right now is trying to claim agent security. The pitch is straightforward: agents are non-human, we do non-human identities, so we do agents. It sounds logical until you look at how agents actually behave.

A service account does the same thing every time it runs, and you can baseline it in a week. An agent does something different on every run depending on the task, the context, and what tools are available. Service accounts have their own credentials; agents often log in as the human who deployed them, using that person's credentials to perform actions. And unlike a service account, an agent can spawn or interact with sub-agents that either inherit its access or carry their own discrete authorization.

The NHI playbook of rotating secrets, enforcing just-in-time access, and revoking unused permissions does not address an agent that is actively reasoning its way through your environment using a human credential, calling MCP servers your security team has never heard of, and spawning child processes that inherit access nobody reviewed.

We built the agent graph specifically for this problem. It maps the full chain from the human who deployed the agent through every sub-agent, tool call, data access, and downstream system interaction. When something goes wrong, you get the complete story of who did what, traced back to a specific identity at every step, instead of a flat list of disconnected log entries.
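The core of an agent graph is walking parent edges back to a root identity. Here is a minimal sketch of that traversal, assuming a simple child-to-parent map; the `AgentGraph` class and the node names are hypothetical, not Permiso's data model.

```python
class AgentGraph:
    """Maps each spawned node back to whatever spawned it."""
    def __init__(self):
        self.parent = {}   # child -> parent edges

    def spawn(self, child, parent):
        self.parent[child] = parent

    def chain(self, node):
        """Walk from a node back to its root (the deploying human)."""
        path = [node]
        while path[-1] in self.parent:
            path.append(self.parent[path[-1]])
        return list(reversed(path))

g = AgentGraph()
g.spawn("agent-7", "alice@example.com")     # human deploys agent
g.spawn("sub-agent-7a", "agent-7")          # agent spawns sub-agent
g.spawn("tool:db-export", "sub-agent-7a")   # sub-agent invokes a tool
print(g.chain("tool:db-export"))
# ['alice@example.com', 'agent-7', 'sub-agent-7a', 'tool:db-export']
```

A flat log gives you the last hop; the graph gives you the whole chain, which is what "who did what" actually means once sub-agents are involved.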

What Identity Runtime Attribution looks like in practice

We do not just tell you an agent has admin permissions. We show you that the agent used those permissions at 2 AM to access a production database it has never touched before, called an MCP server outside its normal pattern, and spawned a sub-agent that started pulling records from a scope it should not have access to. And we give you the kill switch to stop it before the blast radius grows.

Discovery finds every agent in the environment, including shadow agents nobody sanctioned, running across cloud, SaaS, IdPs, code repositories, Lambdas, containers, and VMs. Attribution ties actions to identities. Observability captures tool calls, MCP connections, and data access as a connected sequence instead of isolated log entries. Detection flags behavioral anomalies in real time. And controls give security teams least privilege recommendations based on what agents actually do, approval gates for high-risk actions, and kill switches that operate at the speed agents make decisions.
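"A connected sequence instead of isolated log entries" comes down to correlation: grouping raw events by a shared run identifier and ordering them in time. This sketch shows the idea under the assumption that each event carries a run ID; the field names and `correlate` function are illustrative, not Permiso's schema.

```python
from collections import defaultdict

# Raw, interleaved entries as a log pipeline would see them
raw_logs = [
    {"run": "r1", "ts": 1, "event": "tool_call:github.clone"},
    {"run": "r2", "ts": 1, "event": "mcp_invoke:mcp://crm"},
    {"run": "r1", "ts": 2, "event": "data_access:prod-db"},
    {"run": "r1", "ts": 3, "event": "spawn:sub-agent"},
]

def correlate(logs):
    """Group events by run ID, ordered by timestamp."""
    runs = defaultdict(list)
    for entry in sorted(logs, key=lambda e: e["ts"]):
        runs[entry["run"]].append(entry["event"])
    return dict(runs)

print(correlate(raw_logs)["r1"])
# ['tool_call:github.clone', 'data_access:prod-db', 'spawn:sub-agent']
```

Once events are sequenced per run, detection can reason about order ("data access after an unfamiliar MCP call") rather than scoring each entry in isolation.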

Detection without attribution produces noise. Attribution without detection gives you a nice graph and no way to act on it. The platform needs both, operating together, across the full agent lifecycle.

We built agent security on an identity platform for a reason

While startups race to build something from scratch in this space and incumbents try to cobble together a solution through aggressive M&A, we took a different path. The platform we built to secure human and non-human identities was designed to extend to new identity classes, and agentic AI is exactly that. Our customers were asking us to do it, and the architecture was ready.

When an agent gets compromised, the investigation crosses boundaries. The agent used a human credential to authenticate. The sub-agent it spawned created a service account that accessed a SaaS application through an API. If your agent security tool only sees agents, you are reconstructing half the story. Our Universal Identity Graph already tracks human and non-human identities across IdPs, cloud infrastructure, SaaS applications, and CI/CD pipelines, and agents are now a new node on that same graph with the same investigative tools, alert workflows, and response controls.

Autodesk is one of the first companies deploying these capabilities across their products, global workforce, and cloud infrastructure. They chose Anthropic for their AI journey and Permiso to secure it. They did not come to us asking for a new product. They asked us to extend the platform they already trust.

We shipped that today. And it will change how this market looks 12 months from now.