
Clawdbot Got 247K Stars. It Also Stored Your Passwords in Plaintext.

An Austrian developer built an AI assistant in an hour. Two months later it had a trademark fight, a crypto scam, and a CVE. Welcome to the agent era.

In November 2025, Peter Steinberger had an idea. The Austrian developer (best known for founding PSPDFKit) connected a messaging app to Claude Code and built a personal AI assistant. It took about an hour.

Two months later, the project had 247,000 GitHub stars. It also had a trademark dispute with Anthropic, a $16 million crypto scam, CNBC coverage, a CVE number, and 900+ publicly exposed instances leaking data to the open internet.

This is the story of Clawdbot. It tells you everything you need to know about where AI agents are headed, and why nobody's ready for it.

What Clawdbot actually does

The concept is simple. You text it on WhatsApp (or Telegram, Signal, or Discord). It texts you back. But instead of just chatting, it does things. It checks your email. Runs terminal commands. Manages your calendar. Controls your smart home. Makes purchases. All powered by Claude under the hood.

It's the AI assistant that sci-fi promised us. You tell it what you want in plain English, and it figures out how to make it happen. No apps to open, no buttons to tap. Just a conversation that actually gets stuff done.

People loved it. On January 25, 2026, Steinberger made it public. It got 9,000 GitHub stars in a single day.

Then everything got weird

Two days after launch, Anthropic's lawyers sent a trademark notice. "Clawdbot" was too similar to "Claude." Fair enough. Steinberger announced a rename to Moltbot.

Here's where it gets good. During the roughly ten-second window when Steinberger was switching the @clawdbot Twitter handle to @moltbot, scammers grabbed it. They immediately launched a $CLAWD Solana token using the stolen handle for credibility. It pumped to a $16 million market cap before anyone could stop it.

Ten seconds. Sixteen million dollars. The internet is undefeated.

The Moltbot name didn't stick either. By January 30, another rename: OpenClaw. By February 2, CNBC was covering it. 145,000 stars. 20,000 forks. And then on February 14, Steinberger dropped the real plot twist. He was joining OpenAI. The project got handed off to an open-source foundation.

Three names. One crypto scam. A jump to a competitor. All within three weeks of launch. You couldn't write this stuff.

The part nobody was talking about

While the GitHub stars piled up and the drama played out on Twitter, security researchers were looking at the actual code. What they found wasn't great.

Plaintext credentials. Clawdbot stored API keys and passwords in plain-text files under ~/.clawdbot. No encryption. No hashing. Just your passwords sitting in a folder. Even deleted keys stuck around in .bak files that never got cleaned up.
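For a sense of how small the missing fix is, here's a minimal sketch of owner-only credential storage. Everything here is illustrative: the path, the JSON layout, and the function names are made up for this example, not taken from Clawdbot's code.

```python
import json
import os
import stat

# Hypothetical secrets file for a local agent; NOT Clawdbot's actual layout.
SECRETS_PATH = os.path.expanduser("~/.agent-secrets.json")

def write_secrets(secrets: dict) -> None:
    """Create the file with owner-only permissions (0600) at creation time,
    so there is no window where group/other can read it."""
    fd = os.open(SECRETS_PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        json.dump(secrets, f)

def owner_only(path: str) -> bool:
    """True if no group/other permission bits are set on the file."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0

write_secrets({"api_key": "example-not-a-real-key"})
```

This doesn't get you encryption at rest (an OS keychain would), but even file permissions set at create time are more than what shipped.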

Unauthenticated WebSocket. The tool used a WebSocket connection on port 18789 with zero authentication. Anyone who could reach that port could send commands to your agent. Click a malicious link, and an attacker could tell your AI assistant to exfiltrate your data. This got its own CVE number: CVE-2026-25253.
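The two mitigations whose absence earned the CVE are not exotic. A rough sketch, with hypothetical names (AGENT_TOKEN, authorize) that are not from the actual codebase:

```python
import secrets

# 1. A per-install shared secret that every client must present.
AGENT_TOKEN = secrets.token_urlsafe(32)

def authorize(presented_token: str) -> bool:
    """Constant-time comparison, so an attacker can't guess the
    token one byte at a time via timing differences."""
    return secrets.compare_digest(presented_token, AGENT_TOKEN)

# 2. Bind the control socket to loopback only, never 0.0.0.0,
# so it is unreachable from the network in the first place.
BIND_ADDRESS = ("127.0.0.1", 18789)
```

Neither line of defense existed: the socket listened without any token check, which is why "can reach the port" equaled "can command the agent."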

Publicly exposed instances. Researchers found over 900 instances running on the open internet within seconds of scanning. Nine hundred personal AI assistants with access to their owners' email, files, and terminals. Wide open. No authentication required.

Palo Alto Networks flagged the "lethal trifecta": access to private data, exposure to untrusted content, and the ability to communicate externally, with persistent memory layered on top as a fourth ingredient. Everything an attacker needs, wrapped in a friendly chat interface. Kaspersky published warnings. XDA told people to think twice before installing it.

This is a tool that can read your email, run code on your machine, and control your smart home. And for weeks, it was protected by nothing.

Chatbots are not agents

This is the part that matters beyond the Clawdbot story.

When ChatGPT has a security flaw, someone might see your conversation history. That's bad, but the damage is limited. A chatbot reads your text and writes a response. It doesn't do anything in the real world.

Agents are fundamentally different. They don't just talk. They act. They execute code. They send emails on your behalf. They make API calls, manage files, and interact with other services. When an agent has a security flaw, someone isn't just reading your messages. They're controlling your computer.

Think about what Clawdbot had access to. Your email inbox. Your calendar. Your terminal. Your smart home devices. Your files. Now imagine a stranger with the same access. That's what an unauthenticated WebSocket means in practice.

The gap between "chatbot security" and "agent security" is enormous. With a chatbot, the worst case is a data leak. With an agent, the worst case is full remote control of your digital life. Yet most AI agents today are being built with chatbot-level security thinking. Or less.

The pattern we should worry about

Clawdbot isn't an isolated case. It's a template. A talented developer builds something cool. It goes viral. Hundreds of thousands of people install it. And nobody checks the security until after the damage is done.

This will happen again. It's probably happening right now with a dozen other AI agent projects on GitHub. The tools for building agents are getting easier every month. The security practices for running them haven't caught up. Not even close.

There's no standard for how an AI agent should store credentials. No agreed-upon authentication model for agent-to-service communication. No framework for limiting what an agent can do if it gets compromised. The entire security model for AI agents is basically "hope the developer thought about it."
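To make "limiting what an agent can do" concrete, here's the simplest possible version of the missing framework: a deny-by-default allowlist in front of shell execution. This is a hypothetical sketch, not any real project's API.

```python
# Deny-by-default: an agent may only run executables we explicitly trust.
# The allowlist contents are illustrative.
ALLOWED_COMMANDS = {"ls", "cat", "git"}

def is_permitted(command_line: str) -> bool:
    """Permit a command only if its executable name is allowlisted.
    Anything unrecognized (including an empty string) is refused."""
    parts = command_line.strip().split()
    return bool(parts) and parts[0] in ALLOWED_COMMANDS
```

A real policy layer would also need argument filtering, path restrictions, and rate limits, but even this ten-line gate is more containment than most agent projects ship with today.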

Most of them haven't. Not because they're careless, but because the field is brand new. Steinberger built Clawdbot in an hour. It's genuinely impressive that it worked at all. But "it works" and "it's secure" are very different bars, and the second one takes a lot longer to clear.

Chapter one

AI agents are coming whether we're ready or not. They'll be amazing. The idea of texting an AI that handles your email, manages your schedule, and runs your errands? That future is real and it's close.

But the security model for that future barely exists yet. We're building the most powerful personal tools in the history of computing, and we're protecting them with plaintext password files and open WebSockets.

Clawdbot went from zero to 247,000 stars in six weeks. It earned a CVE in even less time. It got renamed three times, spawned a crypto scam, and its creator left for OpenAI. The whole thing reads like satire. But it's not. It's just the beginning.

The agent era is here. The security for it isn't. That's the real story.
