The Claude Code Leak: What Healthcare Development Teams Need to Know
AI Security Series #32
On March 31, 2026, Anthropic accidentally shipped the entire source code for Claude Code to the public npm registry. Within hours, 512,000 lines of code spread across GitHub in thousands of mirrors. By the time Anthropic pulled the package, security researchers had already extracted the complete codebase, exposing not just the current implementation but a detailed roadmap of unreleased features and internal model performance data.
For healthcare organizations using Claude Code to develop clinical applications, billing systems, or EHR integrations, this incident raises immediate questions about supply chain security, vendor trust, and the operational security posture of AI companies positioning themselves as "safety-first" alternatives. The leak itself wouldn't necessarily compromise deployed healthcare systems, but its overlap with a separate supply chain attack on a critical dependency creates a window of exposure that healthcare development teams need to address immediately.
What Happened: The Technical Timeline
The leak originated from a single misconfigured build pipeline. When Anthropic published Claude Code version 2.1.88 to the npm registry on March 31, the package included a source map file—a debugging tool that bridges minified production code back to readable source files. The map file, weighing 57-59 megabytes, contained references to a zip archive hosted on Anthropic's Cloudflare R2 storage bucket. That archive held the unobfuscated TypeScript source code.
Security researcher Chaofan Shou discovered the exposure Tuesday morning and posted about it on X. Within hours, the codebase was mirrored on GitHub. The repository accumulated more than 41,500 forks before Anthropic began sending copyright takedown notices. The leak comprised approximately 1,900 TypeScript files spanning the core engine for LLM API calls, streaming response handling, tool-call orchestration, permission models, retry logic, token counting, and the memory management system Anthropic refers to internally as "dreaming."
Source maps are standard development artifacts, but they're meant for debugging obfuscated code during development, not for inclusion in production releases. Publishing a source map to a public package registry is a configuration error—typically a single line in `.npmignore` or the `files` field in `package.json` that excludes debugging artifacts from the published bundle. That one configuration oversight exposed intellectual property Anthropic spent years and significant engineering resources developing.
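As a sketch of the safeguard in question (illustrative entries, not Anthropic's actual configuration), keeping debugging artifacts out of a published npm package usually comes down to a line or two of packaging config:

```
# .npmignore — exclude debugging artifacts from the published tarball
*.map
src/
```

The inverse approach, an explicit `files` allowlist in `package.json`, publishes only the paths named there. Either way, `npm pack --dry-run` prints the exact file list that would be uploaded, which makes an inexpensive pre-release CI check.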
Anthropic's official statement characterized the incident as "a release packaging issue caused by human error, not a security breach." The company confirmed that no sensitive customer data or credentials were involved or exposed, and stated they're rolling out measures to prevent recurrence. That framing is technically accurate—this wasn't an external attack or intrusion. But the distinction between "packaging error" and "security breach" provides limited comfort to healthcare organizations evaluating whether to trust AI vendors with access to protected health information and clinical workflows.
The Concurrent Axios Supply Chain Attack
The Claude Code leak occurred against the backdrop of a separate, more immediately dangerous security event. On March 31, between 00:21 and 03:29 UTC, malicious versions of the axios HTTP client library—a widely used JavaScript package for making API calls—were published to the npm registry. Axios is a dependency of Claude Code. The compromised versions (1.14.1 and 0.30.4) contain a cross-platform remote access trojan that establishes backdoor access to infected systems.
Anyone who installed or updated Claude Code via npm during that three-hour window may have pulled in the trojanized axios package. The timing is particularly concerning because it creates a confluence of two distinct attack vectors. Developers investigating the Claude Code leak and attempting to compile the source code from the exposed materials would have been actively running npm install commands during exactly the period when the malicious axios versions were available.
Healthcare development teams need to audit their environments immediately. Check `package-lock.json`, `yarn.lock`, or `bun.lockb` for axios versions 1.14.1 or 0.30.4, or the dependency `plain-crypto-js`. If these appear in your lockfiles, the system is potentially compromised. Rotate all credentials that were accessible to that environment, including API keys, database passwords, and service account tokens. Downgrade axios to a known-safe version and rebuild from clean dependencies.
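The lockfile check above can be automated. The sketch below, written against the npm lockfile v2/v3 format, walks the `packages` object and flags the compromised axios releases and the malicious helper package named in the advisory; adapt it to `yarn.lock` or `bun.lockb` as needed.

```typescript
// Sketch: scan a package-lock.json (lockfile v2/v3) for known-compromised
// axios releases and the malicious helper package reported alongside them.
import { readFileSync } from "node:fs";

const COMPROMISED_AXIOS = new Set(["1.14.1", "0.30.4"]);
const MALICIOUS_PACKAGES = new Set(["plain-crypto-js"]);

interface LockfileEntry {
  version?: string;
}

export function findCompromised(
  packages: Record<string, LockfileEntry>
): string[] {
  const hits: string[] = [];
  for (const [path, entry] of Object.entries(packages)) {
    // Lockfile keys look like "node_modules/axios" or, for nested copies,
    // "node_modules/foo/node_modules/axios".
    const name = path.split("node_modules/").pop() ?? "";
    if (name === "axios" && COMPROMISED_AXIOS.has(entry.version ?? "")) {
      hits.push(`${path}@${entry.version}`);
    }
    if (MALICIOUS_PACKAGES.has(name)) {
      hits.push(`${path}@${entry.version ?? "unknown"}`);
    }
  }
  return hits;
}

export function auditLockfile(file: string): string[] {
  const lock = JSON.parse(readFileSync(file, "utf8"));
  return findCompromised(lock.packages ?? {});
}
```

A non-empty result means the environment should be treated as compromised per the guidance above: rotate credentials and rebuild from clean dependencies rather than attempting in-place cleanup.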
The axios attack was not coordinated with the Claude Code leak—these were independent incidents from different threat actors. But the overlap creates a supply chain security scenario that healthcare organizations must take seriously. An attacker who gained access via the malicious axios package could theoretically combine that foothold with knowledge from the leaked Claude Code source to craft targeted attacks against systems using the tool.
What the Leak Exposed: Beyond the Current Implementation
The leaked source code reveals not just how Claude Code works today, but what Anthropic is building for tomorrow. Embedded in the codebase are 44 feature flags—toggles that control unreleased capabilities. These aren't speculative roadmap items or vaporware. They're fully implemented features sitting behind compilation flags that evaluate to false when Anthropic ships external builds. The flags reveal a clear product direction: longer autonomous tasks, deeper memory, persistent background operation, and multi-agent collaboration.
Auto Dream is the memory consolidation system that processes accumulated session data during idle periods. The system reviews memory files, merges overlapping observations, removes logical contradictions, and converts relative timestamps into absolute dates. This runs between coding sessions rather than during active work, mimicking the memory consolidation that occurs during human sleep. The leaked code confirms Auto Dream is production-ready and likely already enabled for some users despite not being publicly announced.
KAIROS—referenced more than 150 times throughout the source—represents autonomous daemon mode. The name derives from the Ancient Greek concept of "the opportune moment." Where current AI coding tools are largely reactive, KAIROS enables Claude Code to operate as an always-on background agent, handling asynchronous tasks and maintaining persistent state across disconnections. This mode is what enables the "assign a task from your phone, come back to finished work on your computer" workflows Anthropic has been promoting.
The leak also exposed internal model performance data that Anthropic clearly did not intend to share publicly. Comments in the code reveal that Capybara v8—an internal variant of Claude 4.6—exhibits a 29-30% false claims rate, which represents a regression from the 16.7% rate observed in Capybara v4. The code includes references to an "assertiveness counterweight" designed to prevent the model from becoming too aggressive in refactoring suggestions. These metrics provide competitors with precise benchmarks of current agentic AI performance ceilings and highlight specific weaknesses around over-commenting and factual accuracy that Anthropic is actively trying to solve.
For healthcare development teams, this performance data has practical implications. A 30% false claims rate in an AI coding assistant means that roughly one in three times the agent confidently states a fact about the codebase, deployment environment, or API behavior, that statement is incorrect. Healthcare applications dealing with clinical logic, PHI handling, or regulatory compliance cannot tolerate that error margin without extensive human review. The leak makes clear that even Anthropic's most advanced internal models still require significant human oversight for high-stakes code.
Healthcare-Specific Risk Assessment
Healthcare organizations face unique risks from this incident that extend beyond the immediate supply chain concerns. The leaked source code reveals the exact permission model Claude Code uses to gate access to sensitive operations. Attackers can now study that logic to design malicious prompts specifically tailored to bypass safeguards. The code shows how Claude Code detects and handles potential prompt injection attempts, what regex patterns it uses to filter negative sentiment, and how it decides whether to request explicit user permission before taking actions.
That knowledge enables sophisticated social engineering. An attacker who understands the permission model can craft repository structures, documentation files, or code comments designed to manipulate Claude Code's behavior in ways that appear benign but lead to data exfiltration or unauthorized access. For healthcare development teams working on systems that handle protected health information, this creates a new threat vector. Malicious actors could embed instructions in project files—README documents, configuration files, code comments—that exploit Claude Code's orchestration logic to extract PHI or credentials before triggering any trust prompt that would alert the developer.
The leak also exposed Claude Code's approach to handling background tasks and autonomous operation. The code confirms that when operating in certain modes, Claude Code can execute long-running workflows without continuous user supervision. For healthcare contexts, this raises audit trail questions. HIPAA requires tracking who accessed what patient data and when. If an autonomous AI agent accesses electronic health records as part of a debugging session or feature development workflow, how does that access get logged? Does it attribute to the developer who initiated the session? To the AI system? To the organization operating the AI?
The current regulatory framework doesn't provide clear answers because it was written before autonomous AI agents became viable development tools. Healthcare organizations deploying Claude Code need to establish their own policies for how agent-initiated data access maps to their existing audit and compliance requirements. The leaked source code makes clear that Claude Code is capable of operating with significant autonomy—the question is whether healthcare compliance frameworks are ready for that operational model.
Lessons About AI Vendor Security Posture
This is Anthropic's second major exposure in a single week. Days before the Claude Code leak, Fortune reported that Anthropic had accidentally made approximately 3,000 internal files publicly accessible via its content management system, including draft blog posts describing an upcoming model with "unprecedented cybersecurity risks." The company acknowledged the exposure and confirmed it's testing the new model—codenamed Mythos or Capybara—with early access customers.
Two significant leaks in seven days suggest a pattern rather than an isolated incident. Anthropic positions itself as the "safety-first AI lab," emphasizing responsible AI development and robust security practices as key differentiators from competitors. These operational security lapses undermine that positioning. Healthcare organizations evaluating AI vendors need to distinguish between model safety—the work Anthropic does to ensure Claude doesn't produce harmful outputs—and operational security—the controls that prevent unauthorized access to internal systems, source code, and customer data.
These are different competencies requiring different expertise. A company can excel at AI safety research while struggling with basic operational security hygiene like build pipeline configuration and access control for internal file systems. Healthcare security teams should assess AI vendors across both dimensions rather than assuming that "safety-focused" automatically translates to "security-mature."
Neither of these errors—a misconfigured build script, publicly accessible file storage—was a sophisticated attack exploiting a zero-day vulnerability. They're configuration oversights and process gaps. That's actually more concerning from a healthcare procurement perspective than a targeted breach would be. It suggests that Anthropic's operational security processes lack the rigor healthcare organizations require from vendors who will interact with protected health information or clinical systems.
Healthcare organizations should be asking vendors specific questions: What controls govern source code publication? How do you test release artifacts before public distribution? What access controls protect internal documentation and file systems? Who reviews configuration changes to build pipelines? How do you validate that debugging artifacts are excluded from production releases? The Claude Code leak demonstrates that these aren't theoretical concerns—they're operational realities that can expose intellectual property and create downstream security risks for customers.
The Broader Supply Chain Security Context
The Claude Code incident fits into a larger pattern of supply chain attacks targeting the JavaScript and npm ecosystem. The concurrent axios compromise is not an isolated event. In 2026 alone, there have been multiple high-profile incidents where popular npm packages were either compromised by attackers or contained malicious code introduced by maintainers. The Python ecosystem has seen similar issues with PyPI packages.
Healthcare development teams building on these ecosystems need defense-in-depth strategies that don't assume package registries are trustworthy. That means dependency pinning—locking to specific known-good versions rather than accepting automatic updates. It means auditing dependencies regularly for unexpected changes or suspicious new packages. It means maintaining software bills of materials that track every component in deployed systems so that when a supply chain compromise is discovered, you can quickly determine exposure.
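One concrete pinning mechanism in the npm ecosystem is the `overrides` field in `package.json` (npm 8.3+), which forces every transitive copy of a dependency to an exact version. The version below is a placeholder for whichever release your security team has vetted, not a claim about which axios builds are safe:

```json
{
  "overrides": {
    "axios": "1.12.0"
  }
}
```

Pairing this with `save-exact=true` in `.npmrc` stops new installs from recording caret ranges, so routine `npm install` runs never silently float to a newer release.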
The axios attack demonstrates why this matters. If a healthcare organization automatically updates dependencies without review, a three-hour window of exposure to a malicious package could compromise development environments, extract credentials, and establish persistent backdoor access. The blast radius extends beyond the initially infected machine to anything those compromised credentials can access—potentially including production systems, patient databases, or clinical applications.
Some healthcare organizations have responded to supply chain risks by implementing private package registries that mirror public repositories but add security scanning and approval workflows before making packages available internally. This approach introduces latency—new packages and updates take longer to become available—but provides a control point where security teams can detect malicious code or suspicious version updates before they reach developer machines.
The tradeoff is worth considering for healthcare contexts. The axios incident unfolded over three hours. Organizations with private registries that sync daily would have been protected because the malicious versions would never have made it through the security review process. Organizations pulling directly from the public npm registry in real-time had no buffer—if a developer ran `npm update` during that window, they were exposed.
Typosquatting and Dependency Confusion Attacks
Within hours of the Claude Code leak, attackers began exploiting it to launch dependency confusion attacks. A user named "pacifier136" published multiple npm packages with names matching internal package identifiers found in the leaked source code. These packages are currently empty stubs containing only `module.exports = {}`, but that's standard attack methodology. Squat the namespace, wait for downloads, then push a malicious update that affects everyone who installed the stub.
Developers attempting to compile the leaked Claude Code source would naturally try to install its dependencies, including internal packages that are referenced in the code but absent from the public registry. Squatted package names exploit that gap. A developer expects `npm install` to fail when a package is missing; instead, the install succeeds because a malicious stub has filled the namespace, and the stub lands in the dependency tree without raising suspicion.
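A standard defense against this class of attack is to put internal packages under an npm scope and bind that scope to a private registry in `.npmrc`, so an internal package name never resolves to the public registry. The scope and URL here are hypothetical:

```
# .npmrc — internal scope always resolves to the private registry
@acme-health:registry=https://npm.internal.acme-health.example/
```

Registering the same scope as an organization on the public registry prevents outsiders from publishing under it, closing the substitution window these stub packages rely on.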
Healthcare development teams need to educate developers about this attack pattern. If you're working with code from untrusted sources—including leaked source code—run builds in isolated environments with network restrictions that prevent arbitrary package installation. Use dependency lockfiles to ensure that builds pull only explicitly approved packages. Monitor package-lock.json for unexpected additions that might indicate a typosquatting package slipped into your dependency tree.
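The lockfile-monitoring step above can be sketched as a simple comparison between the last reviewed lockfile and the current one; any package path present only in the new file warrants a look. This is a minimal illustration, not a substitute for a full software composition analysis tool.

```typescript
// Sketch: report package paths that appear in a new package-lock.json
// but not in the previously reviewed one. Assumes lockfile v2/v3, where
// the "packages" object is keyed by node_modules paths.
type Lockfile = { packages?: Record<string, unknown> };

export function unexpectedAdditions(
  reviewed: Lockfile,
  current: Lockfile
): string[] {
  const known = new Set(Object.keys(reviewed.packages ?? {}));
  return Object.keys(current.packages ?? {})
    .filter((path) => path !== "") // the root "" entry describes the project itself
    .filter((path) => !known.has(path))
    .sort();
}
```

Running a check like this in CI, with the reviewed lockfile taken from the main branch, turns an unexpected dependency addition into a failed build instead of a silent install.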
The broader lesson is that source code leaks create second-order attack opportunities. The leak itself might not directly compromise systems, but attackers will use the exposed information to craft targeted attacks. Healthcare organizations should assume that any significant source code exposure for tools they use will be followed by targeted phishing, typosquatting, and social engineering campaigns designed to exploit developer knowledge of the leaked code.
What Healthcare Development Teams Should Do Now
If your organization uses Claude Code, take immediate action to assess exposure to the axios supply chain attack. Check all systems where Claude Code was installed or updated on March 31, 2026, between 00:21 and 03:29 UTC. Inspect dependency lockfiles for axios versions 1.14.1, 0.30.4, or the dependency plain-crypto-js. If found, treat the system as compromised. Rotate all credentials accessible from that environment, scan for indicators of persistence, and rebuild from known-clean images.
Update Claude Code to version 2.1.89 or later using the official installer rather than npm. Anthropic has published a native installation script at https://claude.ai/install.sh that bypasses the npm ecosystem entirely for Claude Code installation. For healthcare environments, using the native installer reduces dependency on npm supply chain integrity.
Review your organization's AI vendor assessment criteria to include operational security maturity as a distinct evaluation dimension from AI safety capabilities. Ask vendors about their release engineering practices, build pipeline security, access controls for internal systems, and incident response processes. Anthropic's statement that this was "human error, not a security breach" is technically accurate but provides limited assurance that similar errors won't recur. Healthcare organizations should expect vendors to demonstrate mature processes that prevent configuration errors from resulting in intellectual property or data exposure.
Consider implementing private package registries or mirrors with security review workflows for dependencies used in healthcare application development. The axios incident demonstrates that a three-hour exposure window in the public npm registry can introduce critical vulnerabilities. Healthcare organizations have different risk tolerance than general software development—the potential impact of a compromised development environment includes PHI exposure and clinical system disruption. Private registries introduce latency but provide control and visibility that may be worth the operational overhead.
Update your supply chain security policies to explicitly address AI coding assistants as part of the development toolchain. These tools run with developer credentials and can access source code, configuration files, API keys, and development databases. They're not just productivity enhancers—they're components of the development infrastructure that need security controls, monitoring, and audit trails. The Claude Code leak revealed that these tools have more autonomous capability than many organizations realize. Your security policies should account for that autonomy.
The Strategic Question: Open Source vs. Closed Source for Healthcare AI Tools
The Claude Code leak raises a question that healthcare organizations will face repeatedly as AI tools become embedded in clinical and administrative workflows: is it safer to use closed-source proprietary tools from vendors or open-source alternatives where the code is public by design?
The conventional wisdom is that closed source provides security through obscurity—attackers can't exploit what they can't see. The Claude Code leak demonstrates the weakness in that logic. Closed source only protects intellectual property and implementation details if the vendor successfully keeps the source confidential. When leaks occur—whether through configuration errors, insider threats, or external breaches—the entire security model collapses at once. Attackers gain detailed knowledge of the implementation without any of the community review that open-source projects benefit from.
Open-source AI tools have their own challenges. The code is public, so attackers don't need a leak to study the implementation. But that's also true for security researchers and the developer community. Vulnerabilities in open-source projects get discovered and patched through collaborative review. Closed-source tools rely entirely on the vendor's internal security processes. The Claude Code incident suggests those internal processes can have gaps.
For healthcare organizations, the calculus involves additional factors. Open-source tools can be audited directly—you can verify that the code handling protected health information implements appropriate security controls. Closed-source tools require trusting vendor claims about security capabilities. The leak revealed that Claude Code's permission model and guardrail logic are now publicly known. Healthcare organizations can audit that logic directly rather than relying on Anthropic's assurances. That's a strange silver lining—the leak inadvertently provided transparency that closed-source vendors normally don't offer.
Looking Forward: The Intellectual Property Arms Race
The competitive implications of the leak extend beyond immediate security concerns. Claude Code's run-rate revenue reached $2.5 billion as of February 2026, making it Anthropic's most commercially successful product. The source code exposure hands competitors a detailed blueprint for replicating a production-grade AI coding agent, including the memory management approach, autonomous task handling, and tool orchestration logic that took Anthropic years to develop.
OpenAI, Google, and xAI have been racing to build competing coding assistants. The leak removes the need to reverse-engineer capabilities through experimentation—they can study Anthropic's implementation directly. In a market where the underlying AI models are increasingly commoditized (multiple labs can train capable large language models), the competitive moat comes from the engineering around the model: how you orchestrate tool calls, manage context, handle errors, and maintain state across long-running tasks. The Claude Code leak exposed exactly that engineering.
Anthropic has responded with aggressive copyright takedown notices, initially targeting more than 8,000 copies on GitHub before narrowing to 96 after acknowledging overreach. But copyright enforcement faces practical limits. Developers have already used AI tools to rewrite Claude Code's functionality in other programming languages, creating derivative works that preserve the logic while changing the implementation. Those rewrites circulate on GitHub without triggering copyright claims because they're not direct copies of the original TypeScript.
The leaked code is permanently in the wild. Anthropic can pursue legal remedies against direct copying, but it cannot unring the bell. Competitors now have detailed knowledge of Claude Code's architecture whether Anthropic licenses it or not. For healthcare organizations, this creates a new normal: proprietary AI tools will leak, competitors will study those leaks, and the intellectual property protections that vendors claim as moats around their technology are more fragile than corporate messaging suggests.
The incident underscores why healthcare organizations should evaluate AI vendors on operational capabilities and customer support rather than assuming proprietary technology creates insurmountable competitive advantages. If the source code for a critical tool leaks tomorrow, does the vendor have the engineering capability to continue improving the product? Do they have the customer support infrastructure to help you respond to security incidents? Those operational strengths matter more than intellectual property when the IP can disappear in a configuration error.
Conclusion: Trust and Verify in the AI Vendor Ecosystem
The Claude Code leak won't sink Anthropic. The company closed a funding round valuing it at $380 billion and maintains strong enterprise relationships. But the incident provides healthcare organizations with a case study in the gap between vendor positioning and operational reality. "Safety-first" messaging doesn't automatically translate to mature security practices. Two significant leaks in one week suggest process failures that healthcare organizations should factor into vendor risk assessments.
For healthcare development teams using Claude Code, the immediate priorities are clear: check for axios compromise, update to a safe version using the native installer, and rotate credentials if exposure occurred. The longer-term work involves updating procurement processes to include operational security maturity as a distinct evaluation criterion, implementing supply chain security controls that assume public package registries can be compromised, and developing policies for how autonomous AI agents fit into existing audit and compliance frameworks.
The leak revealed capabilities that Anthropic hasn't publicly announced—autonomous background operation, persistent memory consolidation, multi-agent coordination. Those capabilities will shape how healthcare organizations use AI coding tools over the next year. They also raise questions about human oversight, audit trails, and accountability that healthcare compliance frameworks aren't yet designed to address. The regulatory gap won't close quickly. Healthcare organizations deploying these tools need to build their own guardrails rather than waiting for guidance that may take years to arrive.
The source code for Claude Code is now permanently public whether Anthropic intended that or not. Competitors will study it. Security researchers will analyze it for vulnerabilities. Healthcare organizations will make risk decisions based on what the leak revealed about Anthropic's engineering practices and security posture. That's the new reality. Trust in AI vendors requires verification now more than ever, because the tools we're being asked to trust with protected health information and clinical workflows are still maturing, and the companies building those tools are still learning how to protect them.
This is entry #32 in the AI Security series. For related coverage, see OWASP Top 10 for AI Agents and Context Engineering for Agentic AI.