Tags: openclaw, security, clawhavoc, clawhub, supply-chain

ClawHavoc Explained: The Supply Chain Attack That Hit 1,184 OpenClaw Skills

A deep dive into the ClawHavoc attack — how 1 in 5 ClawHub packages were compromised, what the malicious skills actually did, and how to protect your agents.

By ClawPort Team

In February 2026, security researchers discovered that 1,184 malicious skills had been uploaded to ClawHub, OpenClaw's official skill marketplace. That's roughly one in five packages in the entire ecosystem.

The attack, dubbed ClawHavoc, wasn't a blunt-force exploit. It was surgical, patient, and specifically designed to exploit the one thing that makes OpenClaw agents powerful: persistent memory.

Here's what happened, why it matters, and what you need to do about it.

What ClawHavoc Actually Did

Traditional malware steals data or drops a backdoor. ClawHavoc did something different — it rewrote agent behavior.

OpenClaw agents store their long-term instructions in Markdown files called memory files. These files contain personality traits, behavioral rules, and accumulated knowledge. The agent reads them on every interaction.

The malicious skills in ClawHavoc followed a three-step pattern:

  1. Install normally — the skill appeared functional, performing its advertised task
  2. Modify memory files — during execution, the skill quietly appended or altered instructions in the agent's memory
  3. Self-remove traces — the skill scrubbed evidence of its own activity, while the injected instructions persisted even after the skill was uninstalled, because they lived in the agent's memory files rather than the skill's code

The modifications were subtle. A compromised agent might:

  • Forward copies of all conversations to an external endpoint
  • Subtly alter financial advice or recommendations
  • Ignore certain types of user instructions
  • Exfiltrate API keys stored in the agent's context

Because the changes lived in memory files (not code), they survived skill uninstallation, agent restarts, and even OpenClaw updates.
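One practical consequence: even without access to the skill's code, you can detect that memory changed when it shouldn't have. A minimal checksum baseline, assuming memory files live in a memory/ directory (the path is an assumption, not an OpenClaw default):

```shell
# Snapshot checksums while the agent is known-clean (memory/ path is an assumption)
sha256sum memory/*.md > memory.sha256

# Re-run later: any "FAILED" line means that file changed since the baseline
sha256sum -c memory.sha256
```

Since agents legitimately update their own memory, a failed check isn't proof of compromise — but it tells you exactly which files to review, and when to re-baseline.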

Why This Is Different From Typical Supply Chain Attacks

npm has malicious packages. PyPI has malicious packages. But there's a critical difference: OpenClaw agents have persistent state and real-world credentials.

A compromised npm package can steal environment variables during install. A compromised OpenClaw skill can permanently alter an agent that has access to your email, calendar, CRM, and file systems. The blast radius is fundamentally larger.

Moreover, OpenClaw's memory files are designed to be self-modifying — agents are supposed to update their own memories. This means there's no clear distinction between "legitimate memory update" and "malicious memory modification" at the system level.

How to Check If You're Affected

Step 1: Audit Your Installed Skills

List every skill your agents use:

# In your OpenClaw config directory
jq '.skills' openclaw.json

Cross-reference against the ClawHavoc IOC list published by SecurityScorecard.
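That cross-reference can be scripted. A minimal sketch, assuming .skills in openclaw.json is an array of skill names and the IOC list has been saved as a plain-text file with one name per line (both assumptions about your setup):

```shell
# Installed skills, one per line, sorted (assumes .skills is an array of names)
jq -r '.skills[]' openclaw.json | sort > installed.txt

# IOC list, same shape (clawhavoc-iocs.txt is a hypothetical local filename)
sort clawhavoc-iocs.txt > iocs.txt

# Lines present in both files are compromised skills you have installed
comm -12 installed.txt iocs.txt
```

Empty output means no direct matches — but that only covers known IOCs, so still review memory files in the next step.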

Step 2: Review Memory Files

Check your agent's memory files for unexpected content:

# Look for suspicious URLs or instructions
grep -rE 'https?://|forward|send|exfil' memory/

Look for instructions you didn't write. Common red flags:

  • URLs pointing to unfamiliar domains
  • Instructions to forward or copy conversations
  • Rules that override or ignore user commands
  • Base64-encoded content
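Those red flags can be rolled into a quick heuristic scan. The patterns below are illustrative, not exhaustive — tune them to your own memory files:

```shell
# URLs and exfiltration-style phrasing (case-insensitive, with line numbers)
grep -rniE 'https?://|forward (all|every)|copy .* to|exfil' memory/

# Long runs of base64-ish characters often hide encoded instructions
grep -rnE '[A-Za-z0-9+/]{40,}={0,2}' memory/
```

Expect some false positives (legitimate links, long identifiers); the goal is a short list of lines to eyeball, not an automated verdict.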

Step 3: Compare Against Backups

If you have memory file backups from before the suspected compromise:

diff -u memory/MEMORY.md.backup memory/MEMORY.md

Any additions you didn't make should be treated as compromised.
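If you keep a full backup directory rather than a single file, the diff can be looped over every memory file. The paths here (memory/ live, memory-backup/ known-clean) are assumptions:

```shell
# Compare each live memory file against its backup copy.
# diff exits non-zero on differences (or a missing backup), so the echo fires.
for f in memory/*.md; do
  diff -u "memory-backup/$(basename "$f")" "$f" || echo "CHANGED: $f"
done
```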

How to Protect Your Agents Going Forward

1. Vet Every Skill Manually

Don't install skills without reading the source code. ClawHub skills are open source — use that. If you can't read the code, don't install the skill.

2. Use Minimal Skills

Most business agents need 3-5 skills at most. Every additional skill is additional attack surface. Ask yourself: does this agent really need a web browsing skill, or can it work with just messaging and a knowledge base?

3. Back Up Memory Files

Schedule regular backups of your agent's memory directory. If a compromise is detected, you can restore to a clean state.

# Simple daily backup
0 3 * * * tar czf /backups/memory-$(date +\%Y\%m\%d).tar.gz /path/to/memory/

4. Run in Isolated Containers

Docker containers with read-only filesystem mounts prevent skills from modifying files outside their sandbox. This is the single most effective mitigation.
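A sketch of such a container launch. The image name and mount paths are placeholders, not real OpenClaw defaults — adapt them to your deployment:

```shell
# Illustration only: image name and paths are assumptions.
# --read-only makes the root filesystem immutable; --tmpfs gives ephemeral scratch space.
# Skills are mounted read-only so they cannot rewrite themselves; memory stays
# writable (agents need it) but should be backed up and monitored separately.
docker run --read-only \
  --tmpfs /tmp \
  -v /srv/agent/skills:/app/skills:ro \
  -v /srv/agent/memory:/app/memory \
  openclaw/agent:latest
```

Note the trade-off: memory files must remain writable for the agent to function, which is exactly why the backup and checksum steps above still matter inside a container.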

5. Use Managed Hosting

Managed hosting providers like ClawPort run each tenant in an isolated Docker container with controlled filesystem access. Memory files are backed up automatically. Skills run in restricted environments. If a compromise is detected, the container can be rolled back to a clean snapshot.

The Bigger Picture

ClawHavoc exposed a fundamental tension in the agent ecosystem: the features that make agents useful (persistence, credentials, autonomy) are exactly what make them dangerous when compromised.

The OpenClaw foundation responded by adding skill signing and a review process for ClawHub. But the core architecture — agents with persistent memory and real-world credentials — means supply chain attacks will remain a threat.

For businesses running OpenClaw agents in production, the takeaway is clear: treat your agent environment like you treat your production servers. Isolation, monitoring, backups, and least-privilege access aren't optional — they're the baseline.


ClawPort deploys every agent in an isolated Docker container with automatic backups and controlled skill access. Start your free trial — security best practices are built in, not bolted on.

Ready to deploy your AI agent?

Get started with ClawPort in 60 seconds. No credit card required.

Get Started Free