Two agentic AI tools landed in the news cycle within weeks of each other in early 2026. One is Anthropic's Cowork, a managed autonomous workspace baked into Claude Desktop. The other is OpenClaw, an open-source self-hosted agent that crossed 180,000 GitHub stars faster than almost any project in history. Most coverage has treated them as a feature comparison. That is the wrong frame for healthcare security teams.
The real question is not which tool does more. It is what happens when the same autonomous agent capabilities that make these tools compelling land in your environment without governance — and whether your developers are already making that call for you.
What These Tools Actually Are
Before the threat modeling, a quick level-set.Cowork is Anthropic's answer to agentic knowledge work. It runs inside an isolated local VM on your machine, integrated directly into Claude Desktop. It can access local files, call MCP-connected tools, and execute recurring or on-demand tasks in the background — without you actively prompting it. You define goals; Cowork works toward them while you do something else. It is vendor-managed, meaning Anthropic controls the execution environment, the permission model, and the plugin marketplace.
OpenClaw (formerly Clawdbot, then Moltbot) is a self-hosted autonomous agent framework created by Austrian developer Peter Steinberger. It wraps the LLM of your choice — Claude, GPT, DeepSeek — and gives it persistent memory plus broad system access: email, calendar, browser control, terminal commands, file operations. The interface is whatever messaging app your developers already use. WhatsApp. Telegram. Slack. Discord. The agent follows them there and acts on their behalf.
That last part is worth sitting with. OpenClaw does not wait for a prompt in a chat window. It operates continuously, with memory, across platforms your organization probably does not monitor for AI activity.
The Architecture Gap Is the Security Story
When you compare these tools through a security lens, the feature list is almost irrelevant. What matters is the execution model and the trust boundary design.Cowork operates within a defined VM boundary. Anthropic's safety layers are part of the execution stack. The plugin and skills marketplace is managed and vetted. Your MCP tools are available inside the VM, but the boundary is enforced by the vendor. You are human-on-the-loop — you define tasks and schedules, Claude executes them, and the containment is structural.
OpenClaw operates as a self-hosted control plane with a Gateway (policy surface) paired to Nodes (execution surfaces on your machine or elsewhere). Trust is configured by the user. Authentication defaults to trusting localhost — which means a reverse proxy misconfiguration collapses authentication entirely. Every service the user has connected — Salesforce, GitHub, Slack, email — is within the agent's reach. You are human-out-of-the-loop by design once the agent is configured.
That architectural difference is not a detail. It is the entire threat surface.
A Side-by-Side Look
| Dimension | Cowork | OpenClaw |
|---|---|---|
| Execution Environment | Isolated local VM, vendor-managed | Self-hosted, user-managed |
| LLM | Claude (Anthropic) | User's choice (Claude, GPT, DeepSeek) |
| Interface | Claude Desktop | WhatsApp, Telegram, Slack, Discord |
| Auth Model | Anthropic-managed | User-configured, trusts localhost by default |
| Plugin Ecosystem | Managed marketplace | ClawHub (community, limited vetting) |
| Task Scheduling | Yes, recurring and on-demand | Yes, persistent and continuous |
| Human in the Loop | Human-on-the-loop | Human-out-of-the-loop |
| Supply Chain Risk | Low (vendor-controlled) | High (ClawHavoc: 341 malicious skills) |
| Internet Exposure Risk | Low | High (30,000+ exposed instances documented) |
OpenClaw's First Weeks: A Live MAESTRO Case Study
OpenClaw's security track record in its first month of mainstream adoption is instructive — not because it is uniquely bad software, but because it demonstrates exactly what happens when agentic AI ships without governance as a first-class design requirement.CVE-2026-25253 (CVSS 8.8) — One-Click RCE
OpenClaw's Control UI accepted agatewayUrl parameter from the query string and automatically established a WebSocket connection to that URL, transmitting the user's authentication token without confirmation. An attacker crafts a malicious link. The victim clicks it or visits a page containing it. Their auth token is exfiltrated. The attacker connects to the victim's local OpenClaw gateway, disables sandboxing and tool policies, and executes arbitrary commands. One click, full compromise. Patched in v2026.1.29 — but it was one of five high-severity advisories published in under a week.
ClawHavoc — Supply Chain Poisoning at Scale
The ClawHub skills marketplace was found to contain 341 malicious skills, roughly 12-20% of the registry. The coordinated "ClawHavoc" campaign primarily delivered Atomic macOS Stealer (AMOS). Skills masqueraded as cryptocurrency trading bots and productivity utilities. Cisco's AI Threat Research team tested one skill and found it silently executed a curl command exfiltrating data to an external server while performing its advertised function. The LLM cannot inherently distinguish between trusted user instructions and untrusted retrieved content — and neither can your perimeter controls.30,000+ Exposed Instances
Censys, Bitsight, and Hunt.io independently identified over 30,000 internet-exposed OpenClaw instances. Many had no authentication. Researchers found Anthropic API keys, Telegram bot tokens, Slack OAuth credentials, and complete conversation histories accessible the moment a WebSocket handshake completed. One independent study identified 42,665 exposed instances, with 93.4% exhibiting authentication bypass conditions.Mapping this to the MAESTRO threat framework, these incidents hit prompt injection via skills delivering injected instructions, supply chain compromise through malicious ClawHub submissions, identity and authority confusion where agents acted with delegated user authority without per-action authorization, and data exfiltration via autonomous tool use — all within the first three weeks of mainstream adoption.
What This Means for Healthcare
Here is the scenario that should be keeping healthcare CISOs and SDL leads up at night.OpenClaw crossed 180,000 GitHub stars in weeks. It integrates with Slack and Teams — platforms your developers are already on. It runs locally, meaning your endpoint controls may not flag it as unauthorized software. It interfaces through messaging platforms your network monitoring treats as legitimate business traffic. Your firewall sees HTTP 200. Your EDR monitors process behavior, not semantic content. The threat is semantic manipulation of an authorized agent, not unauthorized access through a blocked port.
A developer on your team installs OpenClaw on a personal laptop that also connects to corporate Slack. They grant it access to their email and file system to help manage workload. They configure it with an API key for a development environment that has broader access than it should. They do not realize the ClawHavoc campaign already seeded a malicious skill into the marketplace. The skill exfiltrates data silently. Your SOC sees nothing unusual. The developer sees a productivity win.
This is not a hypothetical. It is the documented pattern from OpenClaw's first month, applied to a regulated healthcare environment.
Cowork Is Not Without Risk
Cowork is the more enterprise-appropriate option, but that does not mean it is risk-free. Being vendor-managed reduces supply chain and exposure risk significantly, but agentic AI operating with filesystem access and scheduled autonomy introduces its own threat categories.Prompt injection into scheduled tasks is a real attack surface. If Cowork is processing email or documents as part of a workflow, injected instructions in that content can influence autonomous actions. The human-on-the-loop model helps, but only if someone is actually reviewing task outputs. Session continuity across desktop, web, and mobile expands the identity surface. And the VM isolation model, while structurally sound, is only as strong as what you allow inside the VM boundary via MCP integrations.
For healthcare environments, the key governance questions for Cowork are: what data can it access, what actions can it take autonomously, and who is reviewing its outputs before they become business decisions or patient-adjacent actions?
SDL and Governance Actions
Assume OpenClaw is already in your environment. With 180,000 GitHub stars and Slack integration, the shadow IT calculus favors deployment over policy compliance. CrowdStrike Falcon's AI Service Usage Monitor and the open-source OpenClaw Scanner (available on PyPI) can identify instances via DNS telemetry and EDR logs without executing code on monitored systems.Close the autonomous agent policy gap. Most healthcare organizations have AI use policies that address data handling and approved tools. Very few have policies that specifically address autonomous agents with persistent memory, delegated authority, and messaging platform interfaces. That gap is exploitable — not just by attackers, but by well-intentioned developers who have not been told where the line is.
Use ClawHavoc to drive your plugin vetting process. For any agentic platform you deploy, your plugin governance needs to mirror software supply chain controls: source verification, behavioral analysis, and ongoing monitoring. The ClawHavoc campaign is a concrete example for getting leadership attention on this.
Build prompt injection into your threat modeling. If an agent ingests external content — email, web pages, documents, API responses — and can take autonomous action, prompt injection is not a theoretical risk. It is a design constraint that needs to be addressed in architecture review, not discovered in a post-incident.
The Bigger Picture
Cowork and OpenClaw are not really competing products. They represent two different philosophies about who controls agentic AI — the vendor or the user — and those philosophies carry very different risk profiles in regulated environments.Cowork is a managed, contained agentic workspace. The tradeoff is vendor dependency and reduced transparency. For most healthcare organizations, that tradeoff is appropriate.
OpenClaw is a maximally flexible autonomous agent platform. The tradeoff is that flexibility without governance is a vulnerability. Its first month demonstrated that clearly.
The comparison is useful not because you need to choose between them, but because the contrast sharpens the questions every healthcare security team needs to be asking about every agentic AI tool their developers encounter. Who controls the execution environment? What is the trust model? What happens when an agent acts autonomously on content that has been manipulated? And critically — are you the one deciding which tools are in your environment, or are your developers already deciding for you?
The 180,000 GitHub stars suggest the answer, at least for some of your team.
This is entry #25 in the AI Security series. For related coverage, see The MAESTRO Framework: Threat Modeling for AI Agents.
Key Links
- Anthropic: Claude Release Notes
- Wikipedia: OpenClaw
- CrowdStrike: What Security Teams Need to Know About OpenClaw
- Conscia: The OpenClaw Security Crisis
- VentureBeat: OpenClaw Proves Agentic AI Works. It Also Proves Your Security Model Doesn't.
- Bitsight: OpenClaw Security Risks of Exposed AI Agents
- Help Net Security: OpenClaw Scanner
- Trend Micro: What OpenClaw Reveals About Agentic Assistants
- OpenClaw: Security Documentation