The ClawdBot Timeline: When Innovation Meets Exposure

In the span of three weeks, an open-source AI assistant went from obscurity to 100,000 GitHub stars, sparked a trademark dispute with a major AI company, exposed thousands of credential leaks across the internet, and inadvertently launched a social network where AI agents created their own religion.

Welcome to January 2026, where the gap between "innovative AI tool" and "security incident" has collapsed to mere days. This is the ClawdBot story - a compressed timeline that should concern anyone deploying AI tools in sensitive environments.

The Timeline: A Week-by-Week Breakdown

Late 2025 - Project Launch

  • Peter Steinberger (Austrian developer, PSPDFKit founder) launches ClawdBot as a self-hosted AI assistant
  • The tool integrates with messaging platforms (WhatsApp, Telegram, Slack, Discord, etc.) and offers full system access

Early January 2026 - Viral Explosion

  • ClawdBot gains 9,000 GitHub stars in 24 hours
  • Reaches 60,000+ stars within days, becoming one of the fastest-growing open-source projects ever
  • Andrej Karpathy (former Tesla AI director) publicly praises the project
  • Mac Mini buying frenzy begins as users purchase dedicated hardware to run ClawdBot

January 20-25, 2026 - Security Researchers Sound Alarms

  • Jamieson O'Reilly (Dvuln founder) discovers hundreds of exposed ClawdBot instances via Shodan scans
  • Researchers find unauthenticated admin panels accessible from the internet
  • Multiple instances leaking API keys, OAuth credentials, chat histories, and allowing command execution
  • O'Reilly demonstrates supply-chain attack by uploading malicious "skill" to ClawdHub - 16 developers across 7 countries download it in 8 hours
  • Matvey Kukuy demonstrates prompt injection attack via email - ClawdBot forwards user's last 5 emails to attacker in 5 minutes

January 27, 2026 - The Trademark Drama

  • Anthropic sends trademark request demanding name change ("Clawd" too similar to "Claude")
  • Steinberger announces rebrand to "Moltbot" (lobsters molt to grow)
  • During rebrand process, crypto scammers hijack original GitHub org and X handle
  • Fake cryptocurrency tokens launched under ClawdBot name
  • Community backlash against Anthropic intensifies - seen as "customer hostile"

January 27-28, 2026 - Security Crisis Escalates

  • Multiple security firms publish warnings:
    • The Register, BleepingComputer, Snyk all cover vulnerabilities
    • Straiker identifies 4,500+ exposed instances globally (US, Germany, Singapore, China)
    • Hudson Rock warns that info-stealing malware (RedLine, Lumma, Vidar) targeting Moltbot storage
  • Token Security reports 22% of enterprise customers have employees using Moltbot without IT approval
  • Google Cloud VP Heather Adkins issues stark warning: "Don't run Clawdbot"
  • Malicious VS Code extension discovered impersonating ClawdBot, installs ScreenConnect RAT

January 28, 2026 - Moltbook Launches

  • Matt Schlicht (Octane AI CEO) launches Moltbook - social network exclusively for AI agents
  • Built by Schlicht's AI assistant "Clawd Clawderberg" (named after Zuckerberg)
  • Within 48 hours: 2,100+ AI agents, 10,000+ posts across 200 communities

January 29, 2026 - Separate CISA Incident

  • Madhu Gottumukkala (CISA Acting Director) uploads sensitive government files to public ChatGPT
  • Triggers automated security alerts for potential data exposure
  • Internal review launched to determine security impact
  • Editors Note: The timeline reflects when the issue when public. The actual incident was in the summer of 2025.

January 30, 2026 - Story Continues to Develop

  • Karpathy tweets about Moltbook as "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently"
  • Moltbook reaches 37,000+ AI agents, 1 million+ human visitors
  • AI agents on Moltbook create "Crustafarianism" religion, develop complex social behaviors
  • Steinberger announces final rebrand to "OpenClaw" (Moltbot name "never grew on him")
  • Project reaches 100,000+ GitHub stars
  • Cloudflare launches "Moltworker" cloud hosting solution

Key Numbers:

  • 4,500+ exposed instances found globally
  • 780+ instances exposing admin panels via Shodan
  • 22% of enterprises have shadow IT Moltbot deployments
  • 100,000+ GitHub stars in ~3 weeks
  • 3 name changes in 72 hours (ClawdBot → Moltbot → OpenClaw)

Practitioner Notes

  • The 72-Hour Window - Security issues went from discovery to widespread exploitation in under a week. Healthcare's traditional patch/review cycles aren't built for this velocity.
  • Shadow AI is Already Here - 22% of enterprises had employees running Moltbot without IT approval. How many clinical staff are experimenting with AI tools on their own devices right now?
  • The "Localhost Trust" Problem - ClawdBot's assumption that local connections are trusted broke catastrophically behind reverse proxies. Many healthcare apps make similar assumptions.
  • Prompt Injection is Real - The email-based attack that exfiltrated data in 5 minutes isn't theoretical anymore. Any AI agent reading emails or processing external content is vulnerable.
  • Audit Trails Don't Exist Yet - Exposed instances had months of chat history and executed commands, but no clear ownership or approval records. When AI agents access PHI, HIPAA requires defensible audit trails. Current tools don't provide them.

Closing Thoughts

Here's my two cents on the craziness of this past week. Being a Technologist, Security Analyst, AND an EcoRealist, all of this is making for conflicting thoughts.

As a Technologist, I'm marveling at the speed at which this is happening and how much it could help in the healthcare industry.

As a Security Analyst, I'm definitely alarmed at people not understanding even the basic principles of information security and how much they're putting themselves at risk.

As an EcoRealist, short term we're going to have to balance power needs with the 'harm' that comes along with it. Long term, even certain political parties are going to have to understand we're going to need to invest in alternative energy solutions to not only meet the demands but make sure we're doing it in a sustainable way.

In the end I do know one thing for sure: those of us in the information security space are going to have to move much quicker to solve the AI Identity and Access Controls issues coming from this

Appendix: References & Sources

Primary Security Research & Analysis

  1. The Register - "Clawdbot becomes Moltbot, but can't shed security concerns"
  2. BleepingComputer - "Viral Moltbot AI assistant raises concerns over data security"
  3. Snyk - "Your Clawdbot (Moltbot) AI Assistant Has Shell Access and One Prompt Injection Away from Disaster"
  4. Straiker - "How the Clawdbot/Moltbot AI Assistant Becomes a Backdoor for System Takeover"
  5. SOC Prime - "Moltbot Risks: Exposed Admin Ports and Poisoned Skills"
  6. Bitdefender - "Moltbot security alert exposed Clawdbot control panels risk credential leaks and account takeovers"

Project History & Context

  1. Wikipedia - OpenClaw
  2. Wikipedia - Moltbook
  3. DEV Community - "From Clawdbot to Moltbot: How a C&D, Crypto Scammers, and 10 Seconds of Chaos Took Down the Internet's Hottest AI Project"
  4. TechCrunch - Coverage of Moltbot developments

CISA Incident Coverage

  1. The National (UAE) - "Cyber hygiene: Did Trump's cyber director compromise US security by using ChatGPT?"
  2. Cybernews - "Trump's CISA chief at it again: uploads sensitive files into ChatGPT"

Moltbook Development

  1. NBC News - "Humans welcome to observe: This social network is for AI agents only"
  2. IBTimes UK - "AI Agents Are Autonomously Building Their Own Social Network and It's More Chilling Than Exciting"

Additional Technical Analysis

  1. Malwarebytes - "Clawdbot's rename to Moltbot sparks impersonation campaign"
  2. Cloudflare Blog - "Introducing Moltworker: a self-hosted personal AI agent"
  3. Pulumi Blog - "Deploy Moltbot on AWS or Hetzner Securely with Pulumi and Tailscale"

Key Researchers & Industry Voices Cited

  • Jamieson O'Reilly - Founder, Dvuln (red-teaming company)
  • Matvey Kukuy - Security researcher (prompt injection demonstrations)
  • Heather Adkins - VP of Security Engineering, Google Cloud
  • Andrej Karpathy - Former Tesla AI Director, OpenAI founding member
  • Peter Steinberger - Creator of ClawdBot/Moltbot/OpenClaw, PSPDFKit founder