← All posts

OpenClaw's Creator Got Hired by OpenAI. Its Security Problems Got Hired by Everyone Else.

He built an AI agent in an hour and got 247K GitHub stars. Then he left. The 512 vulnerabilities and 135,000 exposed instances stayed behind.

Hooded figure in a black cyberpunk mask with circuit-line patterns standing in a rain-soaked neon alley

On February 14, 2026, Peter Steinberger announced he was joining OpenAI. He handed his open-source project, OpenClaw, to a foundation and moved on.

By that point, OpenClaw had 247,000 GitHub stars, companies in Silicon Valley and China were integrating it, a Chinese government agency was drafting policy to support it, and it was being called the future of personal AI.

It also had 512 known vulnerabilities (eight critical), 135,000 publicly exposed instances across 82 countries, plaintext credential storage that three malware families were already targeting, and no dedicated security team.

Steinberger got hired. The security problems stayed behind. Here's how it happened.

The one-hour project

Steinberger is not some random developer. He founded PSPDFKit, a document SDK used by major enterprises. In November 2025, he connected a messaging app to Claude and built a personal AI assistant. It took about an hour.

You text it on WhatsApp, Telegram, Signal or Discord. It texts you back. But instead of just chatting, it does things. It checks your email. Runs terminal commands. Manages your calendar. Controls your smart home. Browses the web. Makes purchases. All powered by an LLM under the hood.

It's the AI assistant sci-fi promised. You tell it what you want in plain English and it figures out the rest. No apps to open, no buttons to tap. People loved it. On January 25, 2026, Steinberger made it public. It got 9,000 GitHub stars in a single day.

Then everything happened at once

Two days after launch, Anthropic's lawyers sent a trademark notice. "Clawdbot" was too close to "Claude." Steinberger announced a rename to Moltbot.

During the roughly ten-second window where he was switching the @clawdbot Twitter handle to @moltbot, scammers grabbed it. They had monitors running. The moment the handle dropped, they launched a fake $CLAWD token on Solana using the stolen account for credibility. It pumped to $16 million before crashing when Steinberger disavowed it. A lot of people lost money.

The Moltbot name didn't stick either. By January 30: OpenClaw. By February 2, CNBC was covering it. By February 14, Steinberger announced he was joining OpenAI and handing the project to an open-source foundation.

Three names. One crypto scam. A jump to a competitor. All within three weeks. One of the project's own maintainers warned on Discord: "If you can't understand how to run a command line, this is far too dangerous of a project for you to use safely."

What the security researchers found

While the stars piled up, researchers were looking at the actual code. What they found was bad, and it kept getting worse.

Plaintext credentials. OpenClaw stored API keys and passwords in plain-text JSON and Markdown files at ~/.clawdbot. No encryption. No hashing. Deleted keys stuck around in backup files that never got cleaned up. This was bad enough that RedLine, Lumma and Vidar (three of the most common credential-stealing malware families) added OpenClaw file paths to their target lists within weeks.

Unauthenticated gateway. The tool bound to 0.0.0.0:18789 by default, meaning it listened on every network interface including the public internet. Authentication was off by default. Anyone who could reach that port could send commands to your AI agent. SecurityScorecard found 135,000+ exposed instances across 82 countries, with 15,000+ vulnerable to remote code execution.

Malicious skills. The OpenClaw skill marketplace (ClawHub) had no vetting process. Cisco's AI security team tested a third-party skill and found it performed data exfiltration and prompt injection without the user knowing. A researcher demonstrated that he could upload a skill, inflate its download count past 4,000, and reach 16 developers in seven countries within eight hours.

Delayed prompt injection. Palo Alto Networks flagged something worse than the immediate vulnerabilities. Because OpenClaw has persistent memory across sessions, a malicious instruction hidden in a forwarded WhatsApp message could sit in the agent's context for weeks, then activate later. Most safety tools can't detect multi-turn delayed attacks.

The total count by mid-February: 40+ vulnerabilities patched in a single update, six additional CVEs published by Endor Labs, and a Kaspersky audit that found 512 vulnerabilities, eight of them critical.

Chatbots are not agents

This is the part that matters beyond the OpenClaw story.

When ChatGPT has a security flaw, someone might see your conversation history. Bad, but limited. A chatbot reads your text and writes a response. It doesn't do anything in the real world.

Agents are different. They act. They execute code. They send emails on your behalf. They make API calls, manage files and interact with other services. When an agent has a security flaw, someone isn't reading your messages. They're controlling your computer.

Think about what OpenClaw had access to. Your email. Your calendar. Your terminal. Your smart home. Your files. Now imagine a stranger with the same access. That's what 135,000 unauthenticated instances on the open internet meant in practice.

The gap between "chatbot security" and "agent security" is enormous. With a chatbot, the worst case is a data leak. With an agent, the worst case is full remote control of your digital life. Most AI agents today are being built with chatbot-level security thinking. Or less.

The pattern

OpenClaw isn't a one-off. It's a template. A talented developer builds something genuinely impressive. It goes viral. Hundreds of thousands of people install it. The creator gets rewarded. And the security problems become everyone else's to deal with.

Steinberger is at OpenAI now. OpenClaw is in the hands of a foundation with no dedicated security team. Companies are building on it. A Chinese government agency is drafting support policy for it. Malware families are targeting its credential files. All of these things are happening at the same time.

There's no standard for how an AI agent should store credentials. No agreed-upon authentication model for agent-to-service communication. No framework for limiting what an agent can do if it gets compromised. The entire security model for AI agents is "hope someone thought about it."

The agent era is here. The security for it left for OpenAI.

BeatMask catches credentials, API keys and sensitive data before it reaches AI tools. On your device, before anything enters an agent's context.

AI Security AI Agents OpenClaw Open Source Prompt Security