Agentic Runtime Security: IBM's Five Imperatives for Non-Human Identities
AI Security Series #28
Every human identity in your enterprise now spawns dozens of non-human identities. AI agents, service accounts, API tokens, automation scripts — they're multiplying faster than security teams can track them. In a new IBM Think video, the company lays out a framework for what it calls "agentic runtime security" — securing non-human identities at the speed agents actually operate.
The numbers are stark: for every human identity in an AI-enabled enterprise, there are now 45 to 90 non-human identities. Other research puts the ratio even higher — Rubrik Zero Labs recently reported 82 non-human identities for every human user. Whatever the exact number, the trend is clear: machine identities have overwhelmed human identities, and traditional IAM wasn't built for this.
The Security Gaps in Agentic AI
IBM identifies four fundamental security gaps that emerge when AI agents operate in enterprise environments:
Accountability
Agents lack unique identifiers, making it difficult to track their actions. When something goes wrong, you can't answer the basic question: which agent did this? Logs show that authentication succeeded, but they don't show who authorized the action or under what constraints. Without clear accountability, incident response and compliance reporting become guesswork.
Overprivilege
Agents routinely possess more permissions than they need. Static credentials with broad access violate least privilege principles from day one, and the problem compounds as agents proliferate. An agent that needs read access to one database table gets credentials that work across the entire data warehouse. The blast radius of a compromise scales accordingly.
Delegation and Impersonation
Agents acting on behalf of users create complex authorization chains. When Agent A delegates to Agent B, who delegates to Agent C, the audit trail becomes unclear. Worse, "lazy" agents might inherit user identities directly — breaking the audit trail entirely and making it impossible to distinguish between human and agent actions.
The Last Mile Problem
The final interaction between an agent and a backend system happens at machine speed. By the time a human could review and approve an action, the agent has already moved on to the next task. Traditional approval workflows that work for human access become bottlenecks that either slow agents to uselessness or get bypassed entirely.
Five Imperatives for Secure AI Deployment
IBM proposes five imperatives for organizations deploying agentic AI:
1. Register Agents
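Registration can start as something very small: a registry that assigns each agent a unique identifier, a declared capability set, and a quantified risk score. A minimal Python sketch (every name here is hypothetical, not an IBM API):

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One registered agent: who it is, what it may do, how risky it is."""
    name: str
    capabilities: frozenset        # e.g. {"read:ehr", "write:notes"}
    risk_score: int                # 1 (low) .. 10 (high), however you quantify it
    agent_id: str = field(default_factory=lambda: f"agent-{uuid.uuid4()}")

class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, name, capabilities, risk_score):
        record = AgentRecord(name, frozenset(capabilities), risk_score)
        self._agents[record.agent_id] = record
        return record

    def lookup(self, agent_id):
        # Unknown agent IDs fail closed: unregistered agents get nothing.
        return self._agents.get(agent_id)

registry = AgentRegistry()
doc_agent = registry.register("clinical-doc-helper", {"read:ehr"}, risk_score=7)
assert registry.lookup(doc_agent.agent_id).name == "clinical-doc-helper"
assert registry.lookup("agent-unknown") is None
```

The key property is that an unregistered agent simply does not exist to downstream systems: lookups fail closed.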
Assign unique identities to every agent and quantify their risk profiles. You can't secure what you can't see, and you can't see agents that don't have identities. Registration isn't just about inventory — it's about understanding which agents exist, what they can do, and what risk they represent. This is the foundation everything else builds on.
2. Strip Privileges
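One way to implement this imperative is a broker that issues temporary credentials scoped to a single task and revokes them when the task ends. A sketch under those assumptions (class and scope names invented):

```python
import secrets
import time

class CredentialBroker:
    """Issues short-lived credentials scoped to one task; nothing is standing."""
    def __init__(self):
        self._live = {}   # token -> (scope, expiry)

    def issue(self, agent_id, scope, ttl_seconds=60):
        token = secrets.token_urlsafe(16)
        self._live[token] = (scope, time.monotonic() + ttl_seconds)
        return token

    def check(self, token, action):
        grant = self._live.get(token)
        if grant is None:
            return False
        scope, expiry = grant
        if time.monotonic() > expiry:
            del self._live[token]          # expired grants are purged, not refreshed
            return False
        return action in scope

    def revoke(self, token):
        # Called when the task completes: access ends with the task.
        self._live.pop(token, None)

broker = CredentialBroker()
token = broker.issue("agent-42", scope={"read:orders_table"}, ttl_seconds=5)
assert broker.check(token, "read:orders_table")        # scoped action allowed
assert not broker.check(token, "read:warehouse")       # anything else denied
broker.revoke(token)
assert not broker.check(token, "read:orders_table")    # gone when the task ends
```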
Replace static access with dynamic, just-in-time privileges. Agents should request access for specific tasks, receive temporary credentials scoped to that task, and lose access when the task completes. No standing privileges. No persistent credentials. Every access grant is temporary by default.
3. Tie Actions to Intent
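In code, this amounts to propagating a context object that carries the initiating user and stated purpose with every agent hop, so each logged action answers who, why, and through which agents. An illustrative sketch (all names invented):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IntentContext:
    """Propagated with every request so each action traces back to a human intent."""
    user: str
    stated_intent: str
    agent_chain: list = field(default_factory=list)   # agents the request passed through

audit_log = []

def record_action(ctx, agent_id, action):
    ctx.agent_chain.append(agent_id)
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": ctx.user,
        "intent": ctx.stated_intent,
        "chain": list(ctx.agent_chain),   # snapshot of the delegation chain so far
        "action": action,
    })

ctx = IntentContext(user="dr.lee", stated_intent="summarize visit note")
record_action(ctx, "agent-summarizer", "read:ehr/visit/123")
record_action(ctx, "agent-formatter", "write:ehr/summary/123")

# Every entry answers: who initiated it, why, and through which agents.
assert audit_log[-1]["user"] == "dr.lee"
assert audit_log[-1]["chain"] == ["agent-summarizer", "agent-formatter"]
```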
Ensure every agent action is auditable back to the user's original intent. When an agent takes an action, you should be able to trace the chain: which user initiated the workflow, what was the stated purpose, which agents were involved, and whether the actions stayed within scope. Without this traceability, you can't answer why something happened — only that it did.
4. Enforce at Point of Use
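Such a point-of-use check can be sketched as a default-deny function evaluated on every access, combining an explicit grant with a runtime risk signal. A toy example (the grant table and the off-hours rule are invented):

```python
from datetime import datetime, timezone

# Explicit grants: (agent, action, resource) triples. Anything absent is denied.
GRANTS = {
    ("agent-recon", "read", "medications/patient-7"),
}

def permitted(agent_id, action, resource, now=None):
    """Point-of-use check: this agent, this action, this moment, every time."""
    now = now or datetime.now(timezone.utc)
    if (agent_id, action, resource) not in GRANTS:
        return False                       # default-deny on the last hop
    if now.hour < 6:                       # example runtime risk signal: off-hours
        return False
    return True

ok = datetime(2025, 3, 1, 14, 0, tzinfo=timezone.utc)
late = datetime(2025, 3, 1, 3, 0, tzinfo=timezone.utc)
assert permitted("agent-recon", "read", "medications/patient-7", now=ok)
assert not permitted("agent-recon", "read", "medications/patient-7", now=late)
assert not permitted("agent-recon", "write", "medications/patient-7", now=ok)
```

The point is that the same call succeeds or fails depending on when and by whom it is made, not just on a credential presented at login.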
Implement real-time risk and policy checks for every database connection — what IBM calls "the last hop." Policy enforcement can't happen only at the perimeter or only at authentication time. It needs to happen at the moment of access, every time. This means runtime checks that evaluate whether this specific action, by this specific agent, at this specific time, is permitted.
5. Proof of Control
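A common building block for provable audit trails is a hash-chained, append-only log, where altering any past entry breaks every subsequent hash. A minimal sketch (not a compliance-grade implementation):

```python
import hashlib
import json

class TamperEvidentLog:
    """Append-only log where each entry's hash covers the previous entry's hash."""
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64       # genesis value

    def append(self, event: dict):
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self._last_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": self._last_hash, "hash": digest})
        self._last_hash = digest

    def verify(self) -> bool:
        prev = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            if entry["prev"] != prev:
                return False
            if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = TamperEvidentLog()
log.append({"agent": "agent-billing", "action": "read:claims/555", "authorized": True})
log.append({"agent": "agent-billing", "action": "read:claims/556", "authorized": True})
assert log.verify()
log.entries[0]["event"]["action"] = "read:claims/999"   # simulated tampering
assert not log.verify()                                  # chain breaks, tamper detected
```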
Maintain full auditability for compliance in regulated industries. It's not enough to have controls — you need to prove those controls are working. That means immutable logs, comprehensive audit trails, and the ability to demonstrate to regulators that you know what your agents did and that those actions were authorized.
The Technology Stack
IBM outlines three technology layers required to implement these imperatives:
Orchestration
Managing traffic between human and non-human identities. This is the control plane that coordinates agent access across the environment — routing requests, managing credentials, and ensuring agents interact with systems through governed channels rather than direct connections.
Governance
Applying policies across the entire continuum of access. Governance frameworks need to span human users, service accounts, API tokens, and AI agents with consistent rules. Fragmented governance — different policies for different identity types — creates gaps that attackers exploit.
Observability
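On the threat-management side of observability, even simple behavioral baselining helps: flag any agent whose request rate deviates sharply from its own history. A toy sketch (window size and threshold are invented):

```python
from collections import deque
from statistics import mean, pstdev

class RateMonitor:
    """Flags an agent whose per-minute request count deviates from its baseline."""
    def __init__(self, window=30, threshold_sigmas=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold_sigmas

    def observe(self, requests_per_minute):
        anomalous = False
        if len(self.history) >= 10:                  # need a baseline first
            mu = mean(self.history)
            sigma = pstdev(self.history) or 1.0      # avoid divide-by-zero
            anomalous = abs(requests_per_minute - mu) > self.threshold * sigma
        self.history.append(requests_per_minute)
        return anomalous

mon = RateMonitor()
for rate in [10, 12, 11, 9, 10, 11, 10, 12, 9, 11]:   # normal baseline
    assert not mon.observe(rate)
assert mon.observe(500)                                # sudden spike flagged
```

Real deployments would baseline far more than request rate (resources touched, time of day, delegation depth), but the shape is the same: learn normal, alert on deviation.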
Two dimensions here: posture management (consolidating secrets, identifying misconfigurations, understanding your identity attack surface) and threat management (detecting anomalous behavior in real time). Posture tells you where you're vulnerable. Threat management tells you when you're under attack.
What This Means for Healthcare
Healthcare organizations face unique challenges with non-human identity proliferation:
The PHI Access Problem
Every AI agent that touches patient data creates a non-human identity that needs HIPAA-compliant access controls. An agent that helps with clinical documentation needs EHR access. An agent that assists with billing needs claims data. An agent that supports care coordination needs access across multiple systems. Each one multiplies your identity management burden and your compliance exposure.
Audit Trail Requirements
HIPAA requires accounting of disclosures — you need to know who accessed what PHI and why. When "who" is an AI agent acting on behalf of a user acting on behalf of a patient, the audit trail complexity explodes. IBM's "tie actions to intent" imperative isn't optional in healthcare — it's a regulatory requirement.
The Integration Challenge
Healthcare AI agents will integrate with EHRs, lab systems, pharmacy systems, imaging archives, and third-party services. Each integration creates new non-human identities and new access paths to sensitive data. Without a unified governance framework, you end up with fragmented controls and blind spots between systems.
Real-Time Enforcement
Clinical AI agents need to operate fast enough to be useful in care delivery. You can't have five-minute approval workflows for an agent assisting with medication reconciliation. But you also can't give agents standing access to all medications across all patients. The last-mile enforcement problem is particularly acute in healthcare, where both speed and precision matter.
The Bigger Picture
Non-human identity management is becoming the central challenge of enterprise security. IBM's 2026 X-Force report found that 30% of data breaches now start with identity-based attacks — and that number will likely increase as agent deployments scale.
The organizations that deploy AI agents successfully will be the ones that treat non-human identity management as foundational architecture, not an afterthought. The five imperatives IBM outlines aren't revolutionary — register, strip privileges, tie to intent, enforce at point of use, prove control — but they represent a significant shift from how most organizations currently manage machine identities.
Legacy IAM was built for humans. Agentic AI demands something different: identity management that operates at machine speed, scales to machine volumes, and maintains human-understandable accountability throughout.
This is entry #28 in the AI Security series. For related coverage on agent security, see Zero Trust for AI Agents and Human-in-the-Loop Isn't Optional.