IBM X-Force 2026: AI Is Accelerating Attacks, But the Real Problem Is Security Basics

AI Security Series #23

IBM released its 2026 X-Force Threat Intelligence Index yesterday, and the headline finding isn't about sophisticated new attack techniques. It's about how AI is helping attackers exploit the same gaps organizations have failed to close for years.


The core message from IBM's Mark Hughes: "Attackers aren't reinventing playbooks, they're speeding them up with AI. The core issue is the same: businesses are overwhelmed by software vulnerabilities. The difference now is speed."

That's the uncomfortable truth in this year's report. The threats aren't new. Organizations just haven't fixed the basics — and now AI is making that negligence exponentially more dangerous.

The Numbers

The 2026 report is built on IBM X-Force's incident response data from 2025. Here are the key statistics:

| Metric | Finding |
| --- | --- |
| Increase in attacks exploiting public-facing applications | 44% year-over-year |
| Incidents caused by vulnerability exploitation | 40% of all cases (now the leading cause) |
| Vulnerabilities exploitable without authentication | 56% (unchanged for 3 years) |
| New vulnerabilities reported | ~40,000 (up 13,000 from prior year) |
| Increase in active ransomware/extortion groups | 49% (109 groups, up from 73) |
| Increase in supply chain compromises since 2020 | Nearly 4x |
| ChatGPT credentials exposed on dark web | 300,000+ |
| Most targeted industry | Manufacturing (27.7%, fifth consecutive year) |
| Most attacked region | North America (29%, up from 24%) |

The Unauthenticated Vulnerability Problem

One statistic stands out: 56% of tracked vulnerabilities could be exploited without authentication. And this number hasn't changed in three years.

Think about what that means. More than half of the vulnerabilities attackers are exploiting don't require stolen credentials, phishing, or social engineering. They just require finding the vulnerable endpoint.

Jeff Crume from IBM illustrates this in his video summary with an example: an application server that allows unauthenticated users to upload arbitrary files, leading to remote code execution. No credentials needed. No human to trick. Just scan, find, exploit.
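That "scan, find, exploit" pattern is also what makes this class of exposure easy to triage from the defender's side: probe each endpoint with no credentials and see what answers. A minimal sketch of that triage step, with illustrative endpoint paths and status codes that are assumptions rather than anything from the X-Force report:

```python
# Minimal triage sketch: flag sensitive endpoints that respond successfully
# to requests carrying no credentials. Paths and statuses are illustrative.

SENSITIVE_PREFIXES = ("/admin", "/upload", "/api/internal")

def flag_unauthenticated_exposures(probe_results):
    """probe_results: list of (path, status_without_auth) tuples collected
    by probing each endpoint with no Authorization header.
    Returns the sensitive paths that served a 2xx response anonymously."""
    return [
        path
        for path, status in probe_results
        if path.startswith(SENSITIVE_PREFIXES) and 200 <= status < 300
    ]

# Example: a scanner probed three endpoints without credentials
results = [
    ("/admin/users", 200),   # sensitive and open: flag it
    ("/upload", 401),        # correctly rejects anonymous requests
    ("/healthcheck", 200),   # public by design: not flagged
]
print(flag_unauthenticated_exposures(results))  # -> ['/admin/users']
```

The point is not the scanner itself but the decision rule: any sensitive path that returns success to an anonymous request is, by definition, in the 56% category.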

Now add AI to the equation. Attackers are using AI to accelerate vulnerability research, analyze large datasets, and iterate on attack paths in real time. What used to take days of manual reconnaissance can now happen in hours. The window between vulnerability disclosure and exploitation is shrinking — and organizations that were already behind on patching are now critically exposed.

AI Is the Accelerant, Not the Fire

The report makes clear that AI isn't creating fundamentally new attack categories. It's making existing attacks faster, cheaper, and more scalable.

Ransomware Fragmentation

X-Force identified 109 distinct ransomware and extortion groups in 2025, up 49% from 73 the prior year. But this isn't because ransomware is more profitable — it's because the barriers to entry have collapsed.

Smaller, transient operators are reusing leaked tooling from established groups, following published playbooks, and using AI to automate portions of their operations. The dominance of the top 10 ransomware groups dropped by 25%. The ecosystem is fragmenting into smaller, harder-to-track operations.

For defenders, this means attribution is harder, threat intelligence is less predictive, and the long tail of attackers is growing.

TTP Convergence

The line between nation-state actors and financially motivated criminals continues to blur. Techniques that were once the exclusive domain of state-sponsored groups are now circulating on underground forums. North Korean actors are using infostealers — a tool traditionally associated with cybercriminals. Criminal groups are adopting reconnaissance techniques from APT playbooks.

AI accelerates this convergence. When attack techniques can be automated and packaged, they spread faster. The sophistication gap between state actors and criminal gangs is narrowing.

AI Platform Credentials Are Now High-Value Targets

Here's a finding that should concern anyone deploying enterprise AI: infostealer malware led to over 300,000 ChatGPT credentials being advertised on the dark web in 2025.

AI platforms have reached the same credential risk level as other core enterprise SaaS. But the risks are different. Compromised AI credentials don't just give attackers account access — they potentially expose conversation history, proprietary prompts, uploaded documents, and the ability to manipulate outputs or inject malicious prompts.

Password reuse between personal and enterprise accounts creates indirect attack paths. An employee's compromised personal ChatGPT account might contain work conversations, code snippets, or sensitive business context.

Supply Chain Attacks: The 4x Increase

Major supply chain and third-party compromises have increased nearly fourfold since 2020. Attackers are exploiting trusted developer identities, CI/CD pipelines, SaaS integrations, and downstream trust relationships.

The report notes that AI-powered coding tools are accelerating software creation — and occasionally introducing unvetted code. The pressure on development pipelines and open-source ecosystems is expected to grow in 2026.
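One concrete, low-cost control against unvetted code flowing through a pipeline is to reject dependencies that are not pinned to an exact version. A minimal sketch, using a hypothetical requirements-file check (the file contents below are invented for illustration):

```python
# Sketch: flag dependency lines in a requirements.txt-style file that are
# not pinned to an exact version. An unpinned dependency lets a compromised
# upstream release flow straight into the next build.

def unpinned_dependencies(requirements_text):
    flagged = []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        if "==" not in line:  # ranges (>=, ~=) and bare names are unpinned
            flagged.append(line)
    return flagged

reqs = """
requests==2.32.0
flask>=2.0        # range pin: could pull a tampered future release
pyyaml
"""
print(unpinned_dependencies(reqs))  # -> ['flask>=2.0', 'pyyaml']
```

Exact pins (ideally combined with hash checking in the installer) narrow the window a compromised upstream release has to enter a build.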

This finding echoes what we saw with the Change Healthcare breach in 2024 — a single compromised vendor can cascade across an entire industry. Healthcare organizations often have dozens or hundreds of third-party integrations, each representing potential attack surface.

What This Means for Healthcare

Healthcare isn't called out as a top targeted industry in this report — manufacturing and financial services lead. But the trends IBM identifies hit healthcare particularly hard:

Unauthenticated Vulnerabilities in Patient-Facing Systems

Patient portals, telehealth platforms, and appointment scheduling systems are public-facing applications. The 44% increase in attacks exploiting these systems should prompt immediate questions: How many of our patient-facing endpoints allow unauthenticated access to functionality that should be protected? When was our last penetration test?

Medical Device and API Exposure

The 56% of vulnerabilities exploitable without authentication includes APIs — and healthcare is increasingly API-dependent. EHR integrations, medical device communications, and health information exchanges all rely on API security. Missing authentication controls on these endpoints is a direct path to PHI exposure.
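The fix for a missing-authentication API is a guard that runs before any PHI-touching handler does. A minimal sketch of such a check, assuming a conventional `Authorization: Bearer <token>` header (the header format and return shape are illustrative, not tied to any specific framework):

```python
# Sketch of a server-side guard: reject API requests whose Authorization
# header is missing or malformed before any PHI-touching handler runs.

def authorize(headers):
    """Return (allowed, status). Expects 'Authorization: Bearer <token>'."""
    value = headers.get("Authorization", "")
    if not value.startswith("Bearer "):
        return (False, 401)          # no credentials presented at all
    token = value[len("Bearer "):].strip()
    if not token:
        return (False, 401)          # header present but token empty
    # A real system would validate the token's signature and scopes here.
    return (True, 200)

print(authorize({}))                                  # -> (False, 401)
print(authorize({"Authorization": "Bearer abc123"}))  # -> (True, 200)
```

In most frameworks this lives as middleware or a route dependency, so no endpoint can be added without passing through it.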

AI Adoption Without Credential Governance

The 300,000+ exposed ChatGPT credentials are a warning. Healthcare organizations adopting AI tools need to treat those platforms with the same credential hygiene as their EHR systems. That means enterprise accounts with SSO, not personal accounts with password reuse. It means policies about what data can be entered into AI systems. It means monitoring for credential exposure on dark web marketplaces.
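Part of that monitoring can be automated internally: scanning logs, config dumps, and shared documents for strings shaped like AI platform API keys. A rough sketch, assuming the common `sk-` prefix convention for OpenAI-style keys (a convention, not an official specification, so treat matches as leads for a credential audit rather than confirmed secrets):

```python
import re

# Sketch: scan text (logs, config dumps, shared docs) for strings shaped
# like OpenAI-style API keys. The 'sk-' prefix is a common convention,
# not a spec; matches are leads for a credential audit, not proof.

KEY_PATTERN = re.compile(r"\bsk-[A-Za-z0-9]{20,}\b")

def find_candidate_keys(text):
    return KEY_PATTERN.findall(text)

sample = "debug: calling api with key sk-abcdefghijklmnopqrstuv user=jane"
print(find_candidate_keys(sample))  # -> ['sk-abcdefghijklmnopqrstuv']
```

Dedicated secret scanners cover far more key formats, but even a single pattern like this, run over log pipelines, catches the most common leak path: a key pasted into debug output.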

Supply Chain Risk Management

Healthcare's interconnected vendor ecosystem makes the 4x increase in supply chain attacks especially relevant. Third-party risk management isn't optional — it's a core security function. Organizations should be asking vendors about their CI/CD security, their developer identity controls, and their own third-party dependencies.

IBM's Recommendations

The report offers five key recommendations:

  1. Prepare for AI-accelerated attacks — Use AI-driven security tools (agentic AI, autonomous SOC capabilities) to match attacker speed
  2. Monitor human and non-human identities — Deploy AI-powered identity threat detection and response (ITDR) and identity security posture management (ISPM)
  3. Test and hunt for vulnerabilities continuously — Secure code review, credential audits, configuration checks, and regular penetration testing
  4. Prioritize AI platform security — Strong authentication, access controls, and monitoring for AI services
  5. Map your footprint — Identify exposures across surface, deep, and dark web with trusted partners

These are solid recommendations, but they all assume organizations have the basics in place. The report's core finding — that 56% of vulnerabilities don't require authentication and this hasn't changed in three years — suggests many organizations are still struggling with fundamentals like access control and patch management.

The Bigger Picture

The 2026 X-Force report confirms what many security practitioners have suspected: AI is changing the speed of attacks, not their nature. The vulnerabilities being exploited are the same ones we've been warned about for years. The difference is that attackers now have tools that find and exploit them faster than defenders can respond.

This creates an uncomfortable accountability moment. Organizations can't blame sophisticated new threats for breaches caused by missing authentication controls or unpatched systems. The attack surface hasn't fundamentally changed — the time available to address it has shrunk.

For healthcare specifically, this report should prompt two questions:

First, how confident are we that our public-facing applications require authentication for sensitive functionality? The 56% unauthenticated exploitation rate suggests many organizations would be surprised by the answer.

Second, are we treating AI platform credentials with the same rigor as other enterprise systems? The 300,000+ exposed ChatGPT credentials suggest the industry hasn't caught up to this risk.

The threats are accelerating. The fundamentals haven't changed. The gap between those two realities is where breaches happen.


This is entry #23 in the AI Security series. For the video summary of this report, see Jeff Crume's IBM X-Force 2026 Threat Intelligence Index overview.


Key Links