Context Engineering for Agentic AI: Beyond Authentication to Dynamic Authorization
AI Security Series #31

Traditional authentication and authorization models assume a single user logging into a single application to access a single resource. That model breaks down when you introduce autonomous AI agents. An agent doesn't log in once—it acts continuously across multiple sessions, delegates tasks to other agents, and accesses dozens of resources dynamically based on evolving objectives. IBM's Grant Miller explains in a new video why context engineering—not just prompt engineering—is the security control healthcare organizations need to understand.
If you're deploying agents for prior authorization, clinical documentation, or billing workflows, this matters now. The agent making decisions about patient data isn't operating within a traditional access control boundary. It's navigating a dynamic environment where what it should be allowed to do depends on six context dimensions that change during execution. Here's what you need to know.
Traditional Authentication vs. Agentic Context
Traditional systems center on a straightforward trinity: user, application, resource. A clinician logs into the EHR (user authenticates to application), requests a patient record (application verifies authorization), and accesses the data (resource grants access based on role). The security boundary is clear. The access decision happens once at login, and subsequent actions inherit that authorization context.

Agentic systems break this model. An agent isn't a user who logs in. It's an autonomous system that acts across multiple sessions without re-authentication. The agent doesn't request access to a single resource—it dynamically determines which tools to call and which data to retrieve based on its reasoning about the current objective. The security decision isn't "does this user have access to this resource?" It's "should this agent be allowed to do this action, in this situation, with this data, given its current objective?"
That shift requires a different authorization framework. Traditional role-based access control (RBAC) assumes static permissions. Agentic systems need dynamic authorization that evaluates context at decision time, not just at login time.
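To make the contrast concrete, here is a minimal sketch of a decision-time authorization check. Everything in it (the roles, actions, states, and the incident rule) is an illustrative assumption, not a specific product's policy:

```python
from dataclasses import dataclass

@dataclass
class DecisionContext:
    user_role: str       # role of the human who initiated the agent
    action: str          # what the agent wants to do right now
    security_state: str  # e.g. "normal", "audit", "incident"

# Static RBAC-style table: which roles may request which actions.
ROLE_ACTIONS = {
    "nurse": {"read_record", "write_note"},
    "billing_clerk": {"read_claim", "submit_claim"},
}

def authorize(ctx: DecisionContext) -> bool:
    """Allow the action only if the role permits it AND the current
    situation does. The same request can succeed or fail depending on
    environment state, which static RBAC cannot express."""
    if ctx.action not in ROLE_ACTIONS.get(ctx.user_role, set()):
        return False
    # Illustrative rule: during an incident, write-style actions are suspended.
    if ctx.security_state == "incident" and ctx.action.startswith(("write", "submit")):
        return False
    return True
```

The point of the sketch is the shape of the function: `authorize` is called at every action, not once at login, and situation is an input alongside role.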
The Six Dimensions of Agentic Context
Context engineering recognizes that agent behavior depends on six variables that traditional systems don't track. Healthcare security teams need to understand each dimension because they're all potential attack surfaces.

1. User Prompt
The explicit instruction the user gave the agent. In healthcare terms, this is "approve this prior authorization request" or "summarize this patient's recent labs." The prompt defines the agent's objective, but it's not the only input that shapes behavior. Prompt injection attacks exploit this by embedding hidden instructions that redirect the agent's objective.

For healthcare, user prompt context needs validation: does this prompt align with what this user should be asking this agent to do? A billing agent receiving prompts about clinical decision support should trigger an alert. The prompt itself is untrusted input that needs scrutiny.
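A minimal sketch of prompt-scope validation. A real deployment would use an intent classifier; the keyword match and the agent scopes below are purely illustrative assumptions:

```python
# Hypothetical agent scopes: each agent accepts prompts only about
# topics tied to its declared purpose.
AGENT_SCOPE = {
    "billing_agent": {"claim", "invoice", "payment"},
    "documentation_agent": {"note", "summary", "labs"},
}

def prompt_in_scope(agent: str, prompt: str) -> bool:
    """Return True if the prompt mentions at least one topic in the
    agent's declared scope. Out-of-scope prompts should be rejected
    or escalated, not passed through."""
    allowed = AGENT_SCOPE.get(agent, set())
    return any(word.strip(".,?!") in allowed
               for word in prompt.lower().split())
```

The check runs before the prompt ever reaches the model, so a clinical question aimed at a billing agent is caught at the application layer.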
2. Situational Context
The environment state when the agent runs. What time is it? What's the current system load? What recent events occurred? For healthcare agents, situational context includes things like: Is this a HIPAA audit period? Is the system in maintenance mode? Did a security incident just occur?

Situational context affects authorization decisions. A clinical documentation agent might have broader access during normal operations but restricted access during a security event. The same agent, same user, same prompt—different authorization outcome based on situation.
3. Resource Context
What tools and capabilities are available to the agent? An agent with access to EHR write APIs has different risk exposure than an agent with read-only access. Resource context defines the agent's capability boundary—what it can do if it decides to do it.

For healthcare, resource context is where least-privilege principles apply. A prior authorization agent needs read access to clinical guidelines and patient records, but it doesn't need write access to billing systems. Resource context should be scoped to the agent's legitimate workflow, not granted broadly.
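A least-privilege tool registry can be sketched as a simple allowlist checked before any tool call executes. The agent names and tool names here are illustrative assumptions, not a real API:

```python
# Hypothetical registry: each agent is registered with the minimum
# tool set its documented workflow needs.
AGENT_TOOLS = {
    "prior_auth_agent": {"read_guidelines", "read_patient_record"},
    "scheduling_agent": {"read_calendar", "book_slot"},
}

def allow_tool_call(agent: str, tool: str) -> bool:
    """Deny any tool call outside the agent's registered set,
    regardless of what the agent's reasoning decided to attempt."""
    return tool in AGENT_TOOLS.get(agent, set())
```

Because the registry is enforced outside the model, a compromised or misdirected agent still cannot reach tools beyond its blast radius.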
4. User Context
Who initiated the agent, what role do they have, and where are they located? User context carries the human's authorization into the agent's execution. If a nurse initiates a documentation agent, the agent inherits constraints tied to the nurse's role—it shouldn't access records outside that nurse's patient panel or department.

User context also includes location. A documentation agent initiated by a remote user connecting from an unusual geography should face additional scrutiny before accessing PHI. The user's context shapes the agent's authorization boundary.
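The constraints above might be sketched as follows, with a panel check, a geography flag, and a delegation rule that can only narrow permissions. All three policies are assumptions for illustration:

```python
def can_access_record(user_panel: set, patient_id: str) -> bool:
    # The agent may only touch records in the initiating user's panel.
    return patient_id in user_panel

def needs_extra_scrutiny(login_country: str, usual_countries: set) -> bool:
    # Unusual geography doesn't block outright; it flags the session
    # for step-up verification before PHI is released.
    return login_country not in usual_countries

def delegate(parent_perms: frozenset, requested: frozenset) -> frozenset:
    # When one agent calls another, the delegate receives at most what
    # the calling context already had: intersection, never union.
    return parent_perms & requested
```

The `delegate` rule is the key design choice: delegation can narrow the inherited boundary but can never escalate it.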
5. Model Context
What model is the agent using, what are its known limitations, and what training data informed its behavior? Different models have different capabilities and different vulnerabilities. A healthcare agent using a general-purpose model might hallucinate drug interactions. An agent using a fine-tuned clinical model might perform better on medical reasoning but worse on general tasks.

Model context also includes known vulnerabilities. If a particular model version has documented prompt injection weaknesses, agents using that model need additional input validation. Healthcare organizations should track which models their agents use and what security characteristics those models have.
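Even a minimal inventory answers the question that matters when a model vulnerability is disclosed: which agents are affected? The agent and model names below are made up for illustration:

```python
# Hypothetical agent-to-model inventory.
AGENT_MODELS = {
    "prior_auth_agent": "clinical-llm-v2",
    "billing_agent": "general-llm-v1",
    "documentation_agent": "general-llm-v1",
}

def agents_using(model: str) -> list:
    """All agents running a given model version, e.g. for triage
    after a vulnerability disclosure against that model."""
    return sorted(a for a, m in AGENT_MODELS.items() if m == model)
```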
6. Task History (Memory)
What has the agent done in previous interactions? Task history is the agent's memory—both short-term (within this session) and long-term (across sessions). An agent that accessed sensitive records in a previous session carries that history into authorization decisions for the current session.

For healthcare, task history affects audit trails and anomaly detection. An agent that normally processes 10 prior authorization requests per hour suddenly processing 100 requests should trigger review. Task history provides the baseline for detecting abnormal behavior.
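The 10-versus-100 example reduces to a simple rate check against the historical baseline; the threshold factor is an illustrative assumption:

```python
def rate_anomalous(current_per_hour: int,
                   baseline_per_hour: float,
                   factor: float = 3.0) -> bool:
    """Flag the agent for review when its current request rate exceeds
    a multiple of its historical baseline (factor chosen per policy)."""
    return current_per_hour > baseline_per_hour * factor
```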
Context Engineering vs. Prompt Engineering
Prompt engineering focuses on crafting the user's input to get better model outputs. Context engineering is broader—it's about managing all six context dimensions to ensure the agent operates within appropriate boundaries. Prompt engineering is "how do I ask the question?" Context engineering is "what does the system know, what is the user allowed to do, and how do I retrieve the right information to make that decision?"

For healthcare security teams, the distinction matters because prompt engineering alone doesn't address authorization. You can write perfect prompts and still have an agent that accesses data it shouldn't because the resource context, user context, or situational context wasn't properly evaluated.
Context engineering treats authorization as a runtime decision informed by all six dimensions, not a static permission granted at deployment.
What This Means for Healthcare Agent Deployments
Healthcare organizations deploying agents need to translate these six context dimensions into concrete security controls. Here's how.

Validate User Prompt Context
Treat the prompt as untrusted input. Implement input validation that checks whether the prompt aligns with what this user, in this role, should be asking this agent to do. A billing agent should reject prompts about clinical decision support. A documentation agent should reject prompts about payment processing. Prompt validation is your first defense against goal hijacking.

Incorporate Situational Context into Authorization
Agent permissions should be situational, not static. During normal operations, a clinical documentation agent might have broad access. During a security incident or audit period, that access should automatically narrow. Situational context lets you implement dynamic authorization that responds to environmental state, not just user identity.

Enforce Least-Privilege Resource Context
Scope every agent's tool access to its legitimate workflow. Map the agent's intended function to the minimum set of tools and APIs it needs. A prior authorization agent doesn't need EHR write access. A scheduling agent doesn't need billing system access. Resource context defines the blast radius if the agent is compromised—keep it minimal.

Carry User Context Through Delegation
When an agent delegates to another agent, user context must propagate. If a nurse-initiated intake agent calls an insurance verification agent, the insurance agent should inherit the nurse's authorization constraints. Don't let delegation escalate privileges. User context should flow through the entire agent chain, not get reset at each delegation boundary.

Track Model Context for Vulnerability Management
Maintain an inventory of which agents use which models. When a model vulnerability is disclosed, you need to know which agents are affected. Model context also informs risk scoring—agents using models with known prompt injection weaknesses need additional monitoring and input validation.

Leverage Task History for Anomaly Detection
Build behavioral baselines from task history. How many records does this agent typically access per session? What resources does it normally call? What time of day does it usually run? Task history provides the data for anomaly detection that identifies compromised or misbehaving agents.

The Authorization Question Changes
Traditional systems ask "does this user have permission to access this resource?" Agentic systems ask "should this agent be allowed to perform this action, given the user's role, the current situation, the agent's capabilities, the model's characteristics, and the agent's history?"

That's a more complex question, but it's also a more accurate one for autonomous systems. Healthcare agents aren't static—they adapt, delegate, and make decisions dynamically. Authorization needs to adapt with them.
Context engineering provides the framework for building that adaptive authorization layer. The six dimensions—user prompt, situational context, resource context, user context, model context, and task history—are the inputs to runtime authorization decisions that determine what an agent can do at any given moment.
Practical Implementation
Healthcare organizations don't need to build context engineering frameworks from scratch. The principles translate to existing security controls with some adaptation.

Start with resource context—implement least-privilege tool access for every agent. That's the highest-impact control with the clearest implementation path. Scope each agent's API access, database permissions, and tool calls to its documented workflow.
Add situational context through policy gates. Define security states (normal operations, audit period, incident response) and adjust agent permissions based on current state. This is dynamic authorization without requiring new infrastructure—you're just making authorization decisions state-aware.
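A policy gate of this kind can be sketched as a mapping from named security states to permission profiles, consulted at every decision. The states and permissions below are illustrative:

```python
# Hypothetical state-to-permission profiles: the same agent's
# effective permissions depend on the declared system state.
STATE_PROFILES = {
    "normal":   {"read_record", "write_note", "read_labs"},
    "audit":    {"read_record", "read_labs"},
    "incident": {"read_labs"},   # access narrows automatically
}

class PolicyGate:
    def __init__(self, state: str = "normal"):
        self.state = state  # updated by ops tooling, not by the agent

    def permits(self, action: str) -> bool:
        return action in STATE_PROFILES.get(self.state, set())
```

Note the design choice: only operations tooling changes `state`, so the agent (and anything injected into its prompt) cannot widen its own permissions.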
Instrument task history for anomaly detection. Log what each agent does—which tools it calls, which records it accesses, how long each action takes. Build baselines over 2-4 weeks of normal operation, then alert on deviations. Task history turns into behavioral monitoring with minimal additional tooling.
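One sketch of turning those logged counts into a baseline and an alert rule, using a mean-plus-three-sigma threshold as an illustrative choice:

```python
from statistics import mean, stdev

def build_baseline(samples: list) -> tuple:
    """Summarize an observation window (e.g. hourly record-access
    counts over several weeks) as mean and sample standard deviation."""
    return mean(samples), stdev(samples)

def is_deviation(value: float, baseline: tuple, sigmas: float = 3.0) -> bool:
    """Alert when a new observation exceeds the baseline mean by more
    than the configured number of standard deviations."""
    mu, sd = baseline
    return value > mu + sigmas * sd
```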
Validate user prompts at the application layer. Before passing a prompt to the agent, check whether the prompt type matches what this user should be asking this agent. This is input validation extended to semantic content, not just syntax.
The hardest dimension to implement is model context because it requires tracking which agents use which models and maintaining vulnerability intelligence for each model. Start simple: document which agents use which models. When a model vulnerability is disclosed, you'll at least know which agents need attention.
Why This Matters Now
Healthcare organizations are moving from pilot agents to production deployments. Prior authorization agents, clinical documentation agents, and billing agents are handling real workflows with real patient data. Traditional authentication and authorization models weren't designed for these systems, and trying to force-fit RBAC onto autonomous agents creates security gaps.

Context engineering provides the vocabulary and framework for building authorization controls that match how agents actually work. The six dimensions aren't theoretical—they're the variables that determine what an agent does when it executes. Healthcare security teams need to understand these dimensions because each one is both an authorization input and an attack surface.
Grant Miller's video is worth watching in full. It's 13 minutes that will save hours of debate about how to secure agentic systems. The shift from "user logs in" to "agent operates continuously" requires rethinking how authorization works. Context engineering is that rethinking, translated into practical controls.
This is entry #31 in the AI Security series. For related coverage, see OWASP Top 10 for AI Agents: The Security Risks Healthcare Organizations Need To Address.