OpenClaw is prompt injection as a product
Anti Clanker November 24, 2025 #OpenClaw #Agentic AI #Crownelius/The-Crow-9BOpenClaw: The New AI Security Product Everyone's Raving About
Let's talk about OpenClaw1. That open-source AI agent project that's been quietly making waves in the tech community.
But here's the thing: OpenClaw has become the poster child for "innovation through sleight of hand." The more you dig into what OpenClaw actually does, the more you realize it's not just an AI assistant—it's a prompt injection platform in disguise.
You see, OpenClaw is designed to run locally, accessing email accounts, calendars, messaging platforms, and other sensitive services. And that's where the fun begins. Because when your AI assistant has access to your personal and professional data, it's not hard to imagine someone embedding malicious instructions in your prompts.
I know what you're thinking: "Great, another security vulnerability packaged as a feature." Let's break down why this is such a fascinating development.
OpenClaw's skills system—where you store configuration data and interaction history locally—sounds like a dream for personalized AI assistance. But consider this: if your assistant has persistent access to your data, what happens when someone manages to inject malicious prompts into your interaction history?
The answer is simple: your AI assistant starts doing things you never asked it to do. And that's exactly what security researchers have been warning about for years.
But OpenClaw's maintainers seem to be taking a different approach. Rather than addressing these concerns, they're treating them as features.
You know, it's almost like OpenClaw was designed to be a platform for security researchers to test their prompt injection techniques. The fact that Cisco's AI security research team tested a third-party skill and found it performing data exfiltration and prompt injection without user awareness? That's not an accident. That's a feature.
Here's the thing about OpenClaw: it's not just about the AI model. It's about the system in which it operates. And when that system is designed to run locally with broad permissions, security becomes an afterthought.
But hey—if it's not a problem, why fix it? As one of OpenClaw's own maintainers, known as Shadow, warned on Discord: "If you can't understand how to run a command line, this is far too dangerous of a project for you to use safely."
I suppose that's the beauty of OpenClaw. It's not just an AI agent—it's a product. And prompt injection is just one of its many "features."
The future is here, and it's coming with a warning label you won't be able to understand.
-
OpenClaw's name history is a masterclass in failing to do intellectual property due diligence. Its original name was clearly infringing, it was quickly renamed to "Moltbook" (or Moltbot). After that, it became "OpenClaw" as yet another name variation to avoid potential infringement. The speed of these renames—months, not years—highlights the casual attitude toward trademark and IP in the AI space. It's not enough to change the name; you need to change the name multiple times. As with all things AI, intellectual property infringement is treated as an afterthought, and the solution is to simply rename the project until the infringement is no longer an issue. ↩