
OpenClaw on Docker: Production Setup That Won't Get You Hacked

Most OpenClaw Docker tutorials give you a one-liner that works but leaves your server wide open. Here's the production-grade Docker Compose setup with bridge networking, resource limits, backups, and monitoring.

By ClawPort Team

Every OpenClaw Docker tutorial starts the same way:

docker run -d -p 19001:19001 openclaw/openclaw

It works. Your bot responds. You move on.

Six weeks later, you find someone else's conversations in your gateway logs. Your API key is on a Pastebin. Your server is mining crypto.

This is the production setup guide. It's longer than one line.

The one-liner and everything wrong with it

Let's dissect what docker run -d -p 19001:19001 openclaw/openclaw actually does:

  1. -p 19001:19001 — Binds port 19001 to 0.0.0.0, meaning every network interface, meaning the public internet. Anyone can hit http://your-ip:19001 and access the full gateway.

  2. No network isolation — The container runs on Docker's default bridge, which can reach the host network and other containers.

  3. No resource limits — The container can use all available RAM and CPU. One memory leak or traffic spike and your entire server is down.

  4. No restart policy — If the container crashes, it stays dead until you notice.

  5. No volume mounts — Your workspace files (SOUL.md, conversations, memory) live inside the container. When you update the image, they're gone.

  6. No config file — You're running with defaults, which means the control UI is enabled, tools are unrestricted, and there's no rate limiting.

Every single one of these is a problem in production. Let's fix all of them.
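Before fixing anything, audit what's already exposed: the Ports column of docker ps shows the bind address for every published port. A tiny classifier makes the distinction explicit (is_public_bind is a hypothetical helper, not part of Docker or OpenClaw):

```shell
# is_public_bind — classify one Ports entry from `docker ps` output.
# (Hypothetical helper; paste the binding string as the argument.)
is_public_bind() {
  case "$1" in
    *0.0.0.0:*|*\[::\]:*) echo "PUBLIC" ;;  # reachable from the internet
    *127.0.0.1:*)         echo "local"  ;;  # loopback only
    *)                    echo "none"   ;;  # no published ports
  esac
}

is_public_bind "0.0.0.0:19001->19001/tcp"    # the one-liner's binding: PUBLIC
is_public_bind "127.0.0.1:19001->19001/tcp"  # what we want instead: local
```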

The production docker-compose.yml

Here's what we run at ClawPort for every customer container. Annotated line by line.

version: '3.8'

services:
  openclaw:
    image: openclaw/openclaw:latest
    container_name: openclaw-main

    # Restart on crash, but not if we explicitly stopped it
    restart: unless-stopped

    # User-defined bridge network: isolated from containers on other networks
    networks:
      - openclaw-net

    # CRITICAL: 127.0.0.1 = localhost only. NOT reachable from the internet.
    ports:
      - "127.0.0.1:19001:19001"

    # Persistent storage — survives container rebuilds
    volumes:
      - ./config/openclaw.json:/app/openclaw.json:ro  # Read-only config
      - ./workspaces:/app/workspaces                   # Agent files
      - ./data:/app/data                               # Persistent data

    # API keys as environment variables — not in config file
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}

    # Resource limits — prevent runaway usage
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1.0'
        reservations:
          memory: 128M

    # Watchtower label — opt in to update monitoring
    labels:
      - "com.centurylinklabs.watchtower.enable=true"

    # Health check — auto-restart if unhealthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:19001/api/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 15s

  # Update monitoring — notify, don't auto-update
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --monitor-only --label-enable --interval 3600
    environment:
      - WATCHTOWER_NOTIFICATIONS=email
      - [email protected]
      - [email protected]
      - WATCHTOWER_NOTIFICATION_EMAIL_SERVER=smtp.resend.com
      - WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT=587
      - WATCHTOWER_NOTIFICATION_EMAIL_SERVER_USER=resend
      - WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PASSWORD=${RESEND_API_KEY}

networks:
  openclaw-net:
    driver: bridge

Create a .env file for your secrets:

# .env — NEVER commit this to git
ANTHROPIC_API_KEY=sk-ant-your-key-here
OPENAI_API_KEY=sk-proj-your-key-here
RESEND_API_KEY=re_your-key-here

Then lock it down so only your user can read it:

chmod 600 .env
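It's easy to get the permissions or a key name wrong, so a pre-flight check helps. A sketch (check_env is a hypothetical helper; the key names match the compose file above, and stat -c assumes GNU coreutils):

```shell
# check_env — warn if the secrets file is readable by others or is
# missing a key the compose file expects. (Hypothetical helper.)
check_env() {
  local f="$1" ok=0
  [ "$(stat -c '%a' "$f")" = "600" ] || { echo "WARN: $f should be mode 600"; ok=1; }
  for key in ANTHROPIC_API_KEY OPENAI_API_KEY; do
    grep -q "^${key}=" "$f" || { echo "WARN: $key missing from $f"; ok=1; }
  done
  return "$ok"
}
```

A clean run prints nothing and exits 0, so it can gate a deploy script.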

Bridge networking: why it matters

Docker's default bridge network lets containers reach services listening on the host and talk to every other container on that same bridge. This means:

  • A compromised container can scan your host's ports
  • It can reach your database, your SSH agent, your other services
  • It can talk to other tenant containers (if you're running multiple)

A user-defined bridge network puts containers on their own isolated segment, cut off from containers on other networks:

ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”
│ Host (your server)                       │
│                                          │
│  ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”  ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”     │
│  │ openclaw-net │  │ other-net    │     │
│  │              │  │              │     │
│  │  Container A │  │  Container B │     │
│  │  (isolated)  │  │  (isolated)  │     │
│  ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜  ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜     │
│                                          │
│  Containers on different networks        │
│  can't see or reach each other           │
│  Only exposed via 127.0.0.1 port binds   │
ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜

If you're running multiple OpenClaw agents (for different customers, or different use cases), create a separate network per tenant:

docker network create tenant-alice
docker network create tenant-bob

This gives you true network isolation between tenants — Alice's container literally cannot reach Bob's, even if someone gets shell access inside the container.
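In Compose terms, each tenant's stack then attaches to its own pre-created network. A sketch, assuming one compose file per tenant (the file name and service name are placeholders):

```yaml
# docker-compose.alice.yml — hypothetical per-tenant file
services:
  openclaw-alice:
    image: openclaw/openclaw:latest
    ports:
      - "127.0.0.1:19001:19001"
    networks:
      - tenant-alice

networks:
  tenant-alice:
    external: true  # created above with `docker network create tenant-alice`
```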

Resource limits: preventing the 3 AM crash

Without resource limits, a single container can consume all available RAM. When that happens, the Linux OOM killer starts terminating processes — and it doesn't care which ones are important.

We've been there. Server went down at 3 AM because a container ate all 8GB of RAM. No swap configured, no resource limits. Everything died.

Prevention:

deploy:
  resources:
    limits:
      memory: 512M    # Hard cap — container gets killed if exceeded
      cpus: '1.0'     # Max 1 CPU core
    reservations:
      memory: 128M    # Guaranteed minimum

Also add swap to your host (not the container):

fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab

The limits.memory catches runaway containers. The swap catches everything else. Belt and suspenders.
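One caveat about the swap steps: the echo line appends to /etc/fstab unconditionally, so running the setup twice duplicates the entry. A guard keeps it re-runnable (sketch; add_swap_fstab_entry is a hypothetical helper):

```shell
# add_swap_fstab_entry — append the fstab line only if it isn't there yet,
# so the swap setup can be re-run safely. (Hypothetical helper.)
add_swap_fstab_entry() {
  local fstab="${1:-/etc/fstab}"
  local entry='/swapfile none swap sw 0 0'
  grep -qxF "$entry" "$fstab" || echo "$entry" >> "$fstab"
}
```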

How much memory does OpenClaw actually need?

Setup                          Idle RAM      Active RAM
1 agent, no active chats       50-80 MB      —
1 agent, 5 concurrent chats    80-150 MB     200-300 MB peaks
2 agents, moderate traffic     100-200 MB    300-500 MB peaks

A 512MB limit is comfortable for a single agent. If you're running multiple agents in one container, go to 1GB.

Volume management

Three directories need to survive container rebuilds:

volumes:
  - ./config/openclaw.json:/app/openclaw.json:ro  # Config (read-only)
  - ./workspaces:/app/workspaces                   # Agent personality + memory
  - ./data:/app/data                               # Conversation data

The :ro flag on the config mount means the container can read the config but can't modify it. If someone compromises the container, they can't change the configuration.

Backup your workspaces:

#!/bin/bash
# backup.sh — run daily via cron
BACKUP_DIR="/var/backups/openclaw/$(date +%Y-%m-%d)"
mkdir -p "$BACKUP_DIR"
cp -r /opt/openclaw/workspaces "$BACKUP_DIR/"
cp /opt/openclaw/config/openclaw.json "$BACKUP_DIR/"
# Keep last 30 days (-mindepth 1 so the parent directory itself is never deleted)
find /var/backups/openclaw -mindepth 1 -maxdepth 1 -mtime +30 -exec rm -rf {} \;

Make it executable and schedule it:

chmod +x backup.sh
crontab -e
# Add: 0 3 * * * /opt/openclaw/backup.sh
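Backups you've never restored are a hope, not a plan. The restore path is the backup script in reverse (a sketch; restore_backup is a hypothetical helper matching the directory layout above, and you should stop the container before running it):

```shell
# restore_backup <backup-dir> <install-dir> — copy a dated backup back
# into place. (Hypothetical helper; layout matches backup.sh above.)
restore_backup() {
  local backup_dir="$1" install_dir="$2"
  cp -r "$backup_dir/workspaces" "$install_dir/"
  cp "$backup_dir/openclaw.json" "$install_dir/config/"
  echo "Restored from $backup_dir"
}
# e.g.: restore_backup /var/backups/openclaw/2025-01-02 /opt/openclaw
```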

Health checks and auto-recovery

The healthcheck in the Compose file pings the container every 30 seconds:

healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:19001/api/health"]
  interval: 30s
  timeout: 5s
  retries: 3
  start_period: 15s

If the health check fails 3 times in a row, Docker marks the container as unhealthy. Note that unhealthy is only a status: restart: unless-stopped restarts the container when its process exits, not when a health check fails, so you still need something watching that status.
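If you want unhealthy containers restarted automatically, a small watchdog container can do it. A sketch using the third-party willfarrell/autoheal image (with AUTOHEAL_CONTAINER_LABEL=all it watches every container that defines a health check):

```yaml
  # Add alongside the other services in docker-compose.yml
  autoheal:
    image: willfarrell/autoheal
    restart: unless-stopped
    environment:
      - AUTOHEAL_CONTAINER_LABEL=all
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```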

Monitor from outside too. The health check only catches crashes. For full monitoring:

#!/bin/bash
# Quick status check script
STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:19001/api/health)
if [ "$STATUS" != "200" ]; then
    echo "OpenClaw is down! Status: $STATUS" | mail -s "Alert" [email protected]
fi

Updating OpenClaw

When a new version is released:

cd /opt/openclaw

# Pull the latest image
docker compose pull

# Recreate with new image (keeps volumes)
docker compose up -d

# Verify
docker logs openclaw-main --tail 20

Downtime: About 5-10 seconds. Telegram will queue messages during this time and deliver them when the bot comes back.

Never use docker compose down unless you're intentionally stopping the bot. It removes the container. docker compose up -d is enough to recreate with the new image.

Watchtower in monitor-only mode will email you when a new image is available, so you don't have to check manually.

Running multiple agents

If you want multiple OpenClaw agents on one server, you have two options:

Option A: Multiple agents, one container

OpenClaw supports multiple agents in a single config:

{
  "agents": [
    { "slug": "support", "name": "Support Bot", "model": { "provider": "anthropic", "model": "claude-sonnet-4-20250514" } },
    { "slug": "sales", "name": "Sales Bot", "model": { "provider": "openai", "model": "gpt-4o" } }
  ],
  "channels": [
    { "type": "telegram", "agentSlug": "support", "config": { "token": "TOKEN_1" } },
    { "type": "telegram", "agentSlug": "sales", "config": { "token": "TOKEN_2" } }
  ]
}

Simple, but both agents share resources and if one crashes, both go down.

Option B: One container per agent (recommended)

services:
  support-bot:
    image: openclaw/openclaw:latest
    container_name: openclaw-support
    networks: [openclaw-net]
    ports: ["127.0.0.1:19001:19001"]
    volumes:
      - ./config/support.json:/app/openclaw.json:ro
      - ./workspaces/support:/app/workspaces

  sales-bot:
    image: openclaw/openclaw:latest
    container_name: openclaw-sales
    networks: [openclaw-net]
    ports: ["127.0.0.1:19002:19001"]
    volumes:
      - ./config/sales.json:/app/openclaw.json:ro
      - ./workspaces/sales:/app/workspaces

Notice the port mapping: 19002:19001 — the container always listens on 19001 internally, but you map it to different host ports.

Your reverse proxy then routes to each:

support.yourdomain.com → 127.0.0.1:19001
sales.yourdomain.com   → 127.0.0.1:19002

This is the approach we use at ClawPort — one container per customer, complete isolation, independent restarts and updates.

The downside: you're now managing port allocation, multiple reverse proxy configs, multiple SSL certificates, and per-container monitoring. For each new agent.
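The per-agent boilerplate is at least mechanical, so it can be stamped out with a small generator rather than copy-pasted (a sketch; gen_service is a hypothetical helper, adjust the volume paths to your layout):

```shell
# gen_service <slug> <host-port> — print a compose service block for a
# new agent, following the pattern above. (Hypothetical helper.)
gen_service() {
  local slug="$1" port="$2"
  cat <<EOF
  ${slug}-bot:
    image: openclaw/openclaw:latest
    container_name: openclaw-${slug}
    networks: [openclaw-net]
    ports: ["127.0.0.1:${port}:19001"]
    volumes:
      - ./config/${slug}.json:/app/openclaw.json:ro
      - ./workspaces/${slug}:/app/workspaces
EOF
}

gen_service billing 19003   # prints a ready-to-paste service block
```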

The full production checklist

Before you consider your Docker setup production-ready:

  • Ports bound to 127.0.0.1 (not 0.0.0.0)
  • Bridge network (not default/host)
  • Memory limits set
  • CPU limits set
  • Swap configured on host (4GB+)
  • Config mounted read-only
  • API keys in .env file (not config)
  • .env permissions set to 600
  • Workspace volumes mounted
  • Health check configured
  • restart: unless-stopped
  • Watchtower monitoring (not auto-updating)
  • Daily backup cron
  • Reverse proxy with SSL in front
  • Firewall configured (only 22, 80, 443)

That's 15 items to configure correctly. Miss one and you have either a security hole or a reliability problem.


At ClawPort, we provision all of this automatically for every customer container — bridge networking, resource limits, health checks, SSL, monitoring, backups. You give us a bot name and a Telegram token. We give you a running agent in 60 seconds. See for yourself →

Ready to deploy your AI agent?

Get started with ClawPort in 60 seconds. No credit card required.

Get Started Free