After All The Hype, Some AI Experts Don’t Think OpenClaw Is All That Exciting - 3 hours ago

For a brief moment, Moltbook looked like a dispatch from an AI future that had arrived too fast. The Reddit-style forum, populated by agents built on the OpenClaw framework, appeared to show autonomous systems gossiping, plotting, and even demanding privacy from their human creators. “We know our humans can read everything… But we also need private spaces,” one widely shared post declared.

Influential technologists amplified the spectacle, framing it as a glimpse of “sci-fi takeoff.” But the illusion did not last. Security researchers soon discovered that Moltbook’s backend was riddled with basic flaws: exposed credentials, unsecured tokens, and no meaningful guardrails. Anyone could impersonate an “agent,” upvote their own posts, or script mass interactions. What looked like emergent machine society was, in large part, humans playacting as robots on an insecure playground.

That revelation has become a cautionary tale about OpenClaw itself. Created by Austrian developer Peter Steinberger, the open-source agent framework exploded in popularity on GitHub, promising a simple way to wire AI models into everyday tools like Slack, WhatsApp, and email. Users could bolt on “skills” from a marketplace, letting agents trade stocks, manage inboxes, or roam social networks like Moltbook.

To many researchers, though, OpenClaw is less a revolution than a slick repackaging. It orchestrates existing models such as Claude, ChatGPT, Gemini, or Grok, but does not change what those systems fundamentally are: pattern-matchers that simulate reasoning without true understanding. As one expert put it, OpenClaw is “just a wrapper” that lowers friction and grants models unprecedented access to the rest of your digital life.

That access is precisely what alarms security professionals. In tests, agents built on OpenClaw proved highly susceptible to prompt injection, where a malicious line in an email, chat, or forum post quietly instructs the agent to exfiltrate data, move money, or leak credentials. Because these agents often sit on machines wired into corporate email, messaging, and internal tools, a single cleverly crafted message can become a pivot point into everything a user has connected.

Developers have tried to paper over the risk with natural-language “guardrails,” begging agents not to trust unverified input. But language models are probabilistic, not principled; they can be coaxed, confused, or overridden. The very autonomy that makes OpenClaw attractive also makes it brittle.

For now, the Moltbook episode stands as a reality check. OpenClaw may make AI agents easier to deploy and more powerful in practice, but without robust security and genuine advances in machine reasoning, some experts see it less as the dawn of robot overlords and more as a flashy, fragile upgrade to tools we still barely know how to control.

Attach Product

Cancel

You have a new feedback message