
Securing AI identities: a 90-day roadmap

Jan 2026 • Aditya Vats • Identity Security

AI has introduced three new identity classes into enterprise environments simultaneously: employees using AI tools through personal and corporate accounts, developers building and deploying AI pipelines with broad access to production infrastructure, and autonomous agents operating on service account credentials with permissions well beyond their actual function. Traditional identity security tools were built to track static configurations. They weren't designed for this.

The result is a coverage gap that most organizations have only recently started to measure. Permiso research indicates roughly 80% of AI activity in enterprise environments is invisible to traditional security tools — not because organizations haven't tried, but because runtime AI behavior doesn't surface through the mechanisms those tools were built to monitor. Closing that gap requires a structured approach, sequenced to build coverage before detection. This guide walks through a practical 90-day implementation path organized around three phases: Discover, Protect, and Defend.


Phase 1: Establishing AI identity visibility (Days 1-30)

The first step in any AI security program is figuring out what's actually running. Most organizations have three to five times more AI identities active than their license counts suggest. Shadow AI adoption, personal account usage, and developer-deployed agents on shared credentials all contribute to an actual inventory that looks nothing like the authorized one.

The goal of the first 30 days is to move from roughly 20% visibility to 100% across all three AI identity classes. For AI users, that means capturing runtime activity rather than relying on license data: who is using which tools, from which accounts, accessing which data. For AI builders, it means cataloging developer accounts with access to model training pipelines, deployment infrastructure, and API keys. For AI agents, it means inventorying every autonomous system, service account, and API integration, and documenting what data each one can reach.
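A minimal sketch of what an inventory record for the three identity classes might look like, with a helper that measures the gap between observed and sanctioned usage. All field and type names here are hypothetical illustrations, not Permiso's data model:

```python
from dataclasses import dataclass, field
from enum import Enum

class IdentityClass(Enum):
    USER = "ai_user"        # employee using AI tools
    BUILDER = "ai_builder"  # developer with pipeline/deployment access
    AGENT = "ai_agent"      # autonomous system on service credentials

@dataclass
class AIIdentity:
    identity_id: str
    identity_class: IdentityClass
    account: str                                           # corporate or personal account used
    tools: list[str] = field(default_factory=list)         # AI services observed at runtime
    data_scopes: list[str] = field(default_factory=list)   # data this identity can reach
    sanctioned: bool = False                               # appears in the authorized inventory?

def visibility_gap(observed: list[AIIdentity]) -> float:
    """Fraction of observed identities missing from the sanctioned inventory."""
    if not observed:
        return 0.0
    shadow = [i for i in observed if not i.sanctioned]
    return len(shadow) / len(observed)
```

A report built on records like these makes the "assumed versus actual" comparison concrete: every shadow identity is one row the license-based inventory never showed.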

By Week 4, your team should have a visibility report showing the gap between assumed and actual AI usage, a list of high-risk patterns (accounts accessing sensitive data outside normal parameters, agents with permission excess), and alignment across security, engineering, and compliance on what the real exposure looks like. 


Phase 2: Reducing attack surface through permission rightsizing (Days 31-60)

With accurate visibility in place, the second phase addresses what is consistently the largest structural risk in AI environments: permission excess. Permiso research shows AI agents routinely operate with up to 90% unused permissions, developers often retain broad access to production model infrastructure long after it's needed, and AI users frequently connect to corporate systems through personal accounts with no governance controls in place.

The remediation approach follows a deliberate sequence, ordered by business disruption risk. AI agents come first because revoking unused permissions from non-human identities carries the lowest likelihood of workflow impact. AI builders come next, with rightsizing focused on cross-environment access — specifically, development accounts that shouldn't have production model access. AI users come last, with enforcement of SSO for sanctioned tools and blocking of personal account usage.
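The core rightsizing computation is a set difference between granted and exercised permissions, remediated in the disruption order described above. A hedged sketch (permission strings and field names are illustrative, not any specific cloud provider's API):

```python
def unused_permissions(granted: set[str], exercised: set[str]) -> set[str]:
    """Permissions granted to an identity but never observed in runtime activity."""
    return granted - exercised

def rightsizing_order(identities: list[dict]) -> list[dict]:
    """Remediate in order of lowest business-disruption risk:
    agents first, then builders, then users."""
    risk_rank = {"ai_agent": 0, "ai_builder": 1, "ai_user": 2}
    return sorted(identities, key=lambda i: risk_rank[i["class"]])

# Example: an agent granted four permissions but observed exercising only one.
granted = {"s3:GetObject", "s3:PutObject", "iam:CreateRole", "kms:Decrypt"}
exercised = {"s3:GetObject"}
excess = unused_permissions(granted, exercised)
# 3 of 4 grants are unused — 75% excess for this single identity
```

In practice the "exercised" set comes from runtime activity logs over a long enough window to capture infrequent but legitimate actions; revoking on too short a baseline is how rightsizing breaks workflows.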

One Permiso customer completed this phase by Day 45. In the process, they identified 47 instances of confidential data in ChatGPT prompts, blocked personal AI account usage across the organization, and closed a data leakage path that had been entirely invisible through static tooling. Organizations following this approach consistently achieve a 70-90% reduction in AI identity attack surface by Day 60. 


Phase 3: Building real-time detection for AI identity threats (Days 61-90)

Detection deployed on incomplete coverage produces false confidence. This phase comes third because behavioral baselines and detection rules are only as reliable as the identity inventory they're built on. With visibility and permission rightsizing complete, the final 30 days focus on operationalizing detection calibrated to how AI identities actually behave.

The architecture starts with per-identity behavioral baselines: what normal looks like for each AI user, each builder account, and each agent. Detection rules can then be tuned to flag meaningful anomalies: unusual prompt patterns or sudden data volume spikes for AI users, after-hours model weight modifications or new service account creation for builders, and 10x normal data access or privilege escalation attempts for agents. MITRE ATLAS provides the threat framework for mapping detection rules to known AI-specific attack patterns, including model poisoning, pipeline compromise, and LLMjacking.
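The baseline-and-threshold logic for one of the signals above (the 10x data-access rule for agents) can be sketched in a few lines. This is a simplified illustration of the concept, not Permiso's detection engine:

```python
from statistics import mean

def build_baseline(daily_access_mb: list[float]) -> float:
    """Per-identity baseline: mean daily data volume over the observation window."""
    return mean(daily_access_mb)

def is_anomalous(today_mb: float, baseline_mb: float, factor: float = 10.0) -> bool:
    """Flag data access exceeding the baseline by the given factor (10x here)."""
    return today_mb > factor * baseline_mb

baseline = build_baseline([120, 95, 140, 110])  # ~116 MB/day for this agent
is_anomalous(90, baseline)    # within normal range
is_anomalous(1500, baseline)  # more than 10x baseline — flag it
```

Real deployments would use a richer model than a mean (seasonality, per-resource baselines, multiple features), but the structure is the same: a per-identity baseline first, thresholds second — which is why this phase depends on the inventory being complete.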

Automated response workflows complete the picture. When an AI agent attempts cross-tenant authentication, or a developer account modifies model weights at 2:00 AM and then creates a new service account with production access, the system should disable the account, revoke active sessions, and alert the SOC with full context and recommended containment steps — before a human manually pieces together what happened. The targets for this phase: AI incident response time under 15 minutes and a false positive rate under 10%. Permiso Defend includes 1,500+ pre-built detection signals to support this deployment. You can see our award-winning threat detection capabilities in this product tour.
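The containment sequence described above — disable, revoke, alert — can be expressed as a simple workflow. Signal names and function signatures here are hypothetical placeholders standing in for an identity provider's API and a SIEM integration:

```python
# Signals the text treats as warranting immediate automated containment.
HIGH_SEVERITY_SIGNALS = {
    "cross_tenant_auth_attempt",
    "model_weight_modification_after_hours",
    "new_service_account_with_prod_access",
}

def respond(identity_id: str, signal: str, alert_soc) -> list[str]:
    """Automated containment for high-severity AI identity signals:
    disable the account, revoke its sessions, then alert the SOC with context."""
    if signal not in HIGH_SEVERITY_SIGNALS:
        return []  # lower-severity signals go to human triage, not auto-containment
    actions = ["disable_account", "revoke_active_sessions"]
    alert_soc({
        "identity": identity_id,
        "signal": signal,
        "actions_taken": actions,
        "recommended_next_steps": ["review_session_history", "rotate_credentials"],
    })
    return actions
```

The ordering matters: containment actions run before the alert so the SOC receives an incident that is already stopped, not one still in progress.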


Frequently Asked Questions

1. How long does it take to implement AI identity security?

A structured implementation can achieve full AI identity coverage in 90 days across three phases. Days 1-30 focus on discovery: inventorying all AI users, builders, and agents, including shadow AI usage that most organizations don't know exists. Days 31-60 focus on reducing the attack surface by rightsizing permissions, starting with AI agents (lowest disruption risk), then builders, then users. Days 61-90 focus on deploying runtime detection with per-identity behavioral baselines calibrated to how AI identities actually behave. The sequencing matters because detection deployed on incomplete coverage produces false confidence.

2. Why do most security tools miss AI identity activity?

Traditional identity security tools were built to track static configurations and human access patterns. AI identities behave differently: employees authenticate to AI services through federated login and then activity becomes invisible, developers cross multiple authentication boundaries simultaneously, and agents generate thousands of API calls with dynamically scoped permissions. Permiso research indicates roughly 80% of AI activity in enterprise environments is invisible to traditional tools, not because organizations haven't tried, but because runtime AI behavior doesn't surface through the mechanisms those tools were built to monitor.

3. What should security teams do in the first 30 days of an AI security program?

The first 30 days should focus entirely on discovery and visibility. Most organizations have three to five times more AI identities active than their license counts suggest due to shadow adoption, personal account usage, and developer-deployed agents on shared credentials. The goal is to move from roughly 20% visibility to full coverage across all three AI identity classes: users, builders, and agents. By the end of the first month, security teams should have a visibility report showing assumed versus actual AI usage, a list of high-risk patterns, and cross-functional alignment on real exposure.

4. How should organizations reduce their AI identity attack surface?

Permission rightsizing should follow a deliberate sequence ordered by business disruption risk. AI agents come first because revoking unused permissions from non-human identities carries the lowest likelihood of workflow impact. AI builders come next, with rightsizing focused on cross-environment access, specifically development accounts that shouldn't have production model access. AI users come last, with enforcement of SSO for sanctioned tools and blocking of personal account usage. Organizations following this approach consistently achieve a 70-90% reduction in AI identity attack surface within 60 days.

5. What does the 90-day AI identity security roadmap cover?

The roadmap provides a phased implementation path for securing AI identities across the enterprise. Phase 1 (Days 1-30) covers discovery and inventory of all AI users, builders, and agents, including shadow AI. Phase 2 (Days 31-60) covers attack surface reduction through permission rightsizing, sequenced by disruption risk. Phase 3 (Days 61-90) covers runtime detection with per-identity behavioral baselines and automated response workflows. The guide includes specific milestones, customer examples, and detection targets to help security teams measure progress at each phase.