ITDR Buyer's Guide 2026
How to evaluate solutions that cover your full attack surface
Identity threat detection and response (ITDR) has moved from an emerging category to a purchasing priority for most enterprise security teams. But as the market has grown, so has the gap between what vendors claim and what they actually cover. The question security leaders face in 2026 isn't whether they need ITDR — it's whether their current or prospective solution can actually detect attacks that span identity providers, cloud infrastructure, SaaS applications, and CI/CD pipelines in a single chain. This guide gives practitioners and security leaders the framework to find out.
The Identity Attack Surface in 2026: Human, Non-Human, and AI
The identity attack surface in 2026 has three distinct components, each with different risk profiles and monitoring requirements. Human identities — employees, contractors, and vendors — remain the most targeted through social engineering, credential theft, and phishing. Non-human identities (NHIs) — access keys, service accounts, API tokens, OAuth applications, and automation credentials — now outnumber human identities by 10:1 or more in most cloud environments and represent the fastest-growing, least-monitored category. AI identities — employees consuming AI services, developers deploying AI systems, and autonomous AI agents operating with real credentials — have introduced a third attack vector that most ITDR tools were not designed to address at all.
Each category requires different detection logic, different data sources, and different response playbooks. A solution that covers human identities well but ignores NHIs leaves a sprawling attack surface unmonitored. A solution that handles both but has no framework for AI agent credentials will miss the next class of identity-based attacks that P0 Labs research has already documented in the wild.
The practical implication: when evaluating ITDR vendors, you need explicit confirmation of coverage across all three identity types — not just a reference to "comprehensive identity security" in a datasheet.
Cross-Boundary Detection: The Defining ITDR Evaluation Criterion
Every major identity-based breach in recent years — Okta (2023), MGM Resorts (2023), Snowflake customers (2024), Marks & Spencer (2025) — followed the same pattern: threat actors compromised identity infrastructure, then moved laterally across multiple cloud service layers. In the MGM breach, LUCR-3 (Scattered Spider) moved from an identity provider compromise into AWS, Slack, and source code repositories in a single attack chain. Dwell time in the IdP and SaaS layers was 69 hours. Dwell time in IaaS was three hours.
A solution monitoring only the identity provider would have detected the initial MFA compromise — and missed everything that followed. A solution monitoring only AWS would have seen service enumeration and logging changes with no context for who was behind them. The only way to reconstruct that attack chain and attribute every action is to stitch identity sessions across authentication boundaries in real time.
This is what makes cross-boundary detection the defining evaluation criterion for ITDR in 2026. When evaluating vendors, the relevant questions are: Does the solution ingest telemetry from IdPs, IaaS, SaaS, and CI/CD simultaneously? Can it correlate events across those layers into a unified identity session? Can it attribute actions back to the originating identity even when access was through federated roles, temporary credentials, or shared service accounts? Vendors who can't answer yes to all three are covering part of your attack surface, not all of it.
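To make the correlation requirement concrete, here is a minimal sketch of what "stitching identity sessions across authentication boundaries" means in practice. The event shape, the normalized principal field, and the time-gap heuristic are all simplifying assumptions for illustration; production ITDR products correlate on far richer keys (session tokens, source IPs, role-assumption chains), but the core idea is the same: events from the IdP, IaaS, SaaS, and CI/CD layers collapse into one per-identity timeline.

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str       # "idp", "iaas", "saas", or "cicd"
    timestamp: float  # epoch seconds
    principal: str    # identity normalized across layers (an assumption;
                      # real attribution must unwind federated roles and
                      # temporary credentials to get here)
    action: str

def stitch_sessions(events, gap_seconds=3600):
    """Group events from all layers into per-identity sessions.

    Events for the same principal that occur within `gap_seconds` of
    each other are treated as one cross-boundary session. The gap
    heuristic is illustrative only.
    """
    sessions = {}
    for ev in sorted(events, key=lambda e: (e.principal, e.timestamp)):
        runs = sessions.setdefault(ev.principal, [])
        if runs and ev.timestamp - runs[-1][-1].timestamp <= gap_seconds:
            runs[-1].append(ev)  # continue the current session
        else:
            runs.append([ev])    # start a new session
    return sessions

# A compromised IdP session followed by AWS and SaaS activity
# collapses into a single attack chain for one identity:
events = [
    Event("idp",  0,    "alice@example.com", "mfa_factor_reset"),
    Event("iaas", 900,  "alice@example.com", "cloudtrail:StopLogging"),
    Event("saas", 1800, "alice@example.com", "slack:export_channels"),
]
chains = stitch_sessions(events)
assert len(chains["alice@example.com"]) == 1  # one session spanning three layers
```

A tool that monitors each layer in isolation sees three unrelated low-severity events here; a tool that stitches them sees one identity disabling logging and exfiltrating data minutes after an MFA reset.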
Download the full evaluation framework with RFP templates
How to structure an ITDR vendor evaluation in 2026
An effective evaluation framework for ITDR in 2026 should assess vendors across six capability pillars:

- Discovery: comprehensive inventory of all identity types
- Posture management: continuous risk scoring and attack path visualization
- Detection: multi-log telemetry correlation across cloud service boundaries
- Hunting: support for human-led investigation and proactive threat hunting
- Response: workflow integrations and MTTR measured in minutes, not hours
- AI and NHI security: dedicated capabilities for the fastest-growing attack surface categories
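One simple way to operationalize the six pillars during an RFP is a weighted scorecard. The sketch below is a hypothetical scoring scheme, not part of any standard framework: the pillar names, the 0-5 scale, and the weights are all assumptions you would tune to your own environment. The useful property it illustrates is that a zero in any pillar is surfaced as an explicit coverage gap rather than averaged away.

```python
PILLARS = [
    "discovery", "posture_management", "detection",
    "hunting", "response", "ai_nhi_security",
]

def score_vendor(scores, weights):
    """Weighted 0-5 scorecard across the six pillars.

    Returns the weighted average plus the list of pillars scored
    zero, each one an area of the attack surface the vendor leaves
    uncovered. Names, scale, and weights are illustrative.
    """
    gaps = [p for p in PILLARS if scores.get(p, 0) == 0]
    total = sum(scores.get(p, 0) * weights[p] for p in PILLARS)
    return total / sum(weights.values()), gaps

# Example weighting: emphasize cross-boundary detection and AI/NHI
# coverage, the two areas this guide argues deserve most scrutiny.
weights = {p: 1.0 for p in PILLARS}
weights["detection"] = 2.0
weights["ai_nhi_security"] = 1.5

scores = {"discovery": 4, "posture_management": 4, "detection": 5,
          "hunting": 3, "response": 4, "ai_nhi_security": 0}
avg, gaps = score_vendor(scores, weights)
assert gaps == ["ai_nhi_security"]  # strong vendor, but one pillar uncovered
```

A high average with a non-empty gap list is exactly the "covers part of your attack surface, not all of it" outcome the cross-boundary criterion warns about.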
For each pillar, the evaluation should distinguish between what a vendor claims and what it can demonstrate in your environment. The RFP evaluation templates in the full guide map specific capabilities to MITRE ATT&CK technique categories for identity provider detections, IaaS detections, SaaS detections, CI/CD detections, and — notably — AI identity detections, a category that didn't exist in most evaluation frameworks two years ago.
Two areas deserve particular scrutiny. For NHI security, confirm the solution can map creation chains showing which humans created which non-human identities, detect orphaned NHIs with active credentials but no current owner, and monitor NHI behavior at runtime using baselines trained on machine access patterns rather than human ones. For AI identity security, confirm the solution can inventory AI agents and map their credentials back to the human identities that provisioned them, detect shadow AI usage through personal accounts, and identify anomalous agent behavior including prompt injection and excessive data access. P0 Labs' OpenClaw research identified 341+ malicious AI agent skills actively delivering credential-stealing malware — which means AI identity risk isn't a theoretical future concern. It's already in the RFP.
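The orphaned-NHI check described above is worth seeing in miniature, because it is one of the easiest capabilities to verify in a proof of concept. The sketch below assumes a simplified inventory shape (an NHI record with a creator and a credential status, checked against the active human directory); a real solution would pull this from cloud IAM APIs and the IdP, but the logic of the check is the same.

```python
def find_orphaned_nhis(nhis, active_humans):
    """Flag non-human identities with active credentials whose
    creating human is no longer in the directory.

    `nhis` is a list of dicts with illustrative keys: "name",
    "created_by" (the human in the creation chain), and
    "credential_active". `active_humans` is the set of current
    human identities.
    """
    return [
        nhi for nhi in nhis
        if nhi["credential_active"] and nhi["created_by"] not in active_humans
    ]

nhis = [
    {"name": "svc-deploy", "created_by": "bob@example.com",   "credential_active": True},
    {"name": "svc-backup", "created_by": "alice@example.com", "credential_active": True},
    {"name": "svc-old",    "created_by": "bob@example.com",   "credential_active": False},
]
# bob has left the company; only alice remains active.
orphans = find_orphaned_nhis(nhis, active_humans={"alice@example.com"})
assert [n["name"] for n in orphans] == ["svc-deploy"]
```

The same creation-chain mapping generalizes to AI agents: an agent's credentials should trace back to the human who provisioned them, and an agent whose provisioning human is gone is an orphaned NHI like any other.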
Use this framework to evaluate whether your ITDR solution covers your full attack surface — human, non-human, and AI. Download the 2026 Buyer's Guide to get the complete RFP templates and evaluation criteria.