Clawdbot to Moltbot: The 72-Hour Implosion of Open Source’s Fastest-Growing AI Project

Yesterday morning, Clawdbot was everywhere. Over 60,000 GitHub stars in under 48 hours. A Discord server buzzing with nearly 9,000 developers. MacStories calling it “the future of personal AI assistants.” People were joking that Mac Mini sales would spike because the tool needed 24/7 hardware to run. If you were anywhere near tech X, you couldn’t escape it.

ClawdBot Star History — Jan 27, 2026

Today? The project has been forced to rebrand. The old social media accounts are now controlled by crypto scammers pumping fake tokens. Security researchers have exposed hundreds of vulnerable installations leaking API keys and private conversations. And the creator, a retired founder who came back to “mess with AI,” is fighting fires on every front.

I’ve been following this story closely because, frankly, it’s one of the wildest 48-hour spirals I’ve seen in open source. What happened to Clawdbot touches on trademark law, platform dependency, crypto opportunism, and the uncomfortable security tradeoffs of giving AI agents real power over your computer. It’s a cautionary tale for anyone building in the AI space, and there’s a lot to unpack here.

What Made Clawdbot Special

Clawdbot wasn’t just another chatbot. Created by Peter Steinberger, the Austrian developer who founded PSPDFKit and sold it for approximately €100 million in 2021, it was an AI assistant that actually did things.

Unlike Siri or Alexa, which live on corporate servers and offer limited actions, Clawdbot ran locally on your own hardware. It connected to your existing messaging apps (WhatsApp, Telegram, Discord, Slack, iMessage) and could execute real tasks: manage emails, book flights, control smart home devices, run terminal commands, even build its own plugins when you asked for new features. This last part is what got people excited. An AI that could extend itself? That’s genuinely new territory.

It belongs to a category known as “agentic AI,” systems that take actions automatically instead of only responding to questions. This is what makes it both powerful and, as we’ll see, risky.

The project launched on January 26, 2026. It hit 9,000 stars within its first day. By the following morning, it had crossed 60,000, making it one of the fastest-growing open-source projects in GitHub history.

And then Anthropic stepped in.

The Trademark Problem

Here’s the twist that makes this story sting: Clawdbot was built to run on Claude.

Steinberger recommended Anthropic’s Claude Opus 4.5 in the documentation. Many users configured it specifically to use Anthropic’s API. The project was essentially free marketing, driving subscriptions and demonstrating real-world applications of Claude’s capabilities. You’d think Anthropic would be thrilled.

On January 27, less than two days after launch, Steinberger announced on X that Anthropic had raised trademark concerns. The name “Clawd” sounded too much like “Claude.”

Fair enough from a legal perspective. Trademark protection isn’t optional if you want to keep your brand. But the timing couldn’t have been worse, right at peak momentum, when every minute of disruption costs you users and goodwill.

Steinberger, to his credit, took it in stride: “Anthropic asked us to change our name (trademark stuff), and honestly? ‘Molt’ fits perfectly — it’s what lobsters do to grow.”

The rebrand narrative worked. Same lobster soul, new shell. Clawdbot would become Moltbot.

What nobody anticipated was how badly the transition itself would go.

The Moltbot rebrand: same lobster, new shell.

When Scammers Move Faster Than You

Here’s where things get ugly.

During the rename, the GitHub transition broke in unexpected ways. And in the brief window between releasing the old @clawdbot handle and securing the new identity, crypto scammers grabbed the accounts.

According to Yahoo Finance, mistakes with account migrations allowed third parties to squat on the related GitHub and X handles. Those accounts were then used to impersonate the project and promote crypto tokens as if they were officially linked.

Within hours, fake $CLAWD tokens appeared on Solana. The token briefly hit a $16 million market cap as traders piled in, thinking they were getting early access to an official AI coin. Then Steinberger publicly denied any involvement. The price cratered. Late buyers lost everything. The scammers walked away with millions.

Steinberger’s response was emphatic: “Stop DMing me. Stop harassing me. I will never launch a token. Any project listing me as a token owner is a scam. No, I won’t accept any fees. You’re harming this project.”

This is a man who already made his fortune. He built Clawdbot for the love of it, as a retirement project to “mess around with AI.” Now he was spending his time fighting crypto grifters instead of improving the tool. You can hear the frustration in every word.

The Security Wake-Up Call

While the trademark and crypto chaos dominated headlines, security researchers were uncovering something arguably worse: many Clawdbot installations were dangerously exposed to the internet.

SlowMist, a blockchain security firm, issued an alert warning that hundreds of API keys and private chat records were vulnerable to attack. Unauthenticated instances were exposed to the internet, with multiple code flaws that could lead to credential theft and remote code execution.

Security researcher Jamieson O’Reilly demonstrated the problem’s scale in a report covered by Cointelegraph. A simple Shodan search for “Clawdbot Control” returned hundreds of hits within seconds. A GitHub issue documented approximately 900 exposed instances, complete with API keys, bot tokens, OAuth secrets, and full conversation histories accessible to anyone who knew where to look.

But the most alarming demonstration came from Matvey Kukuy, CEO of Archestra AI. He sent a malicious email containing prompt injection to a vulnerable Clawdbot instance. The AI read the email, believed it contained legitimate instructions, and forwarded the user’s private emails to an attacker address. The whole attack took five minutes.

Let that sink in. Five minutes from “I’ll send a crafted email” to “I now have your inbox.”

To be fair: these weren’t flaws in Clawdbot’s code. They were configuration mistakes by users who didn’t understand the security implications of running an AI agent with full system access. But the scale of exposure highlighted an uncomfortable truth about agentic AI: the same power that makes these tools useful also makes them dangerous.

As the official documentation warns: “Running an AI agent with shell access on your machine is… spicy. There is no ‘perfectly secure’ setup.”

When every component is exposed, security is just an illusion.

The Bigger Picture

Step back from the immediate drama, and the Moltbot saga reveals several tensions that will define the AI ecosystem going forward.

Building on corporate platforms is risky. Steinberger built on Anthropic’s technology, drove their API usage, and created genuine value for their ecosystem, then faced trademark pressure that destabilized everything. Open-source builders increasingly depend on corporate AI providers with unclear rules, and a single legal notice can cascade into chaos. This isn’t unique to AI (we’ve seen it with Twitter APIs, Google services, and countless other platforms) but the stakes feel higher when you’re building tools that handle people’s emails and smart homes.

Crypto opportunists move at internet speed. The scammers weren’t lucky. They were prepared, monitoring for exactly this kind of opportunity. Any project that goes viral becomes an immediate target. The window between “attention” and “exploitation” is measured in minutes, not days. If you’re launching something that might blow up, have your security posture ready before it does.

Agentic AI has a security problem we haven’t solved. We’re building AI systems that can execute commands, control browsers, and access our most sensitive data. The convenience is remarkable. The attack surface is massive. And most users don’t fully understand the risks they’re taking. Prompt injection isn’t a theoretical concern anymore. It’s a five-minute attack vector.

AI companies and their ecosystems need better relationships. Anthropic isn’t wrong to protect their trademark. But how they handle conflicts with their developer community shapes their reputation for years. There are ways to protect brand names that don’t involve triggering chaos for projects that actively drive your revenue. A heads-up before the project hit 60k stars would have cost nothing and prevented a lot of damage.

Where Things Stand Now

Steinberger is managing several crises at once: recovering hijacked accounts, dealing with crypto harassment, supporting a community of nearly 9,000 Discord members, and addressing security concerns, all while trying to rebuild momentum under a new name.

The project itself remains impressive. Moltbot is the same software Clawdbot was: a genuinely innovative AI assistant that shows what’s possible when you give users full control over their own data and tools. Development continues. The documentation is solid. The community is engaged.

But the last 72 hours have been a masterclass in how quickly things can go wrong when viral success meets corporate trademarks, opportunistic hackers, and security oversights.

For open-source builders, the lesson is clear: protect your handles before you need them. Assume success will bring scammers. And understand that building on corporate platforms means operating under corporate rules, even when those rules aren’t clear until they’re enforced.

For users of agentic AI, security isn’t optional. Don’t bind to public addresses. Enable pairing modes. Use sandboxing. Run the diagnostic tools. The same capabilities that make these tools powerful make them dangerous in the wrong configuration.

Moltbot is still worth trying if you’re technically inclined and security-conscious. It represents a real glimpse of where personal AI assistants are heading. Just learn from what happened here first.

Official Links

You May Also Like

Getting Started with Moltbot: A Complete Installation Guide With updated links after the rebrand and critical security tips.


Clawdbot to Moltbot: The 72-Hour Implosion of Open Source’s Fastest-Growing AI Project was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.

Liked Liked