OpenClaw on Docker: Production Setup That Won't Get You Hacked
Most OpenClaw Docker tutorials give you a one-liner that works but leaves your server wide open. Here's the production-grade Docker Compose setup with bridge networking, resource limits, backups, and monitoring.
Every OpenClaw Docker tutorial starts the same way:
docker run -d -p 19001:19001 openclaw/openclaw
It works. Your bot responds. You move on.
Six weeks later, you find someone else's conversations in your gateway logs. Your API key is on a Pastebin. Your server is mining crypto.
This is the production setup guide. It's longer than one line.
The one-liner and everything wrong with it
Let's dissect what docker run -d -p 19001:19001 openclaw/openclaw actually does:
- -p 19001:19001: binds port 19001 to 0.0.0.0, meaning every network interface, meaning the public internet. Anyone can hit http://your-ip:19001 and access the full gateway.
- No network isolation: the container runs on Docker's default bridge, which can reach the host network and other containers.
- No resource limits: the container can use all available RAM and CPU. One memory leak or traffic spike and your entire server is down.
- No restart policy: if the container crashes, it stays dead until you notice.
- No volume mounts: your workspace files (SOUL.md, conversations, memory) live inside the container. When you update the image, they're gone.
- No config file: you're running with defaults, which means the control UI is enabled, tools are unrestricted, and there's no rate limiting.
Every single one of these is a problem in production. Let's fix all of them.
The production docker-compose.yml
Here's what we run at ClawPort for every customer container. Annotated line by line.
version: '3.8'

services:
  openclaw:
    image: openclaw/openclaw:latest
    container_name: openclaw-main

    # Restart on crash, but not if we explicitly stopped it
    restart: unless-stopped

    # Isolated bridge network: can't reach the host or other containers
    networks:
      - openclaw-net

    # CRITICAL: 127.0.0.1 = localhost only. NOT reachable from the internet.
    ports:
      - "127.0.0.1:19001:19001"

    # Persistent storage: survives container rebuilds
    volumes:
      - ./config/openclaw.json:/app/openclaw.json:ro  # Read-only config
      - ./workspaces:/app/workspaces                  # Agent files
      - ./data:/app/data                              # Persistent data

    # API keys as environment variables, not in the config file
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}

    # Resource limits: prevent runaway usage
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1.0'
        reservations:
          memory: 128M

    # Watchtower label: opt in to update monitoring
    labels:
      - "com.centurylinklabs.watchtower.enable=true"

    # Health check: marks the container unhealthy if the gateway stops responding
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:19001/api/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 15s

  # Update monitoring: notify, don't auto-update
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --monitor-only --label-enable --interval 3600
    environment:
      - WATCHTOWER_NOTIFICATIONS=email
      - [email protected]
      - [email protected]
      - WATCHTOWER_NOTIFICATION_EMAIL_SERVER=smtp.resend.com
      - WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT=587
      - WATCHTOWER_NOTIFICATION_EMAIL_SERVER_USER=resend
      - WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PASSWORD=${RESEND_API_KEY}

networks:
  openclaw-net:
    driver: bridge
Create a .env file for your secrets:
# .env: NEVER commit this to git
ANTHROPIC_API_KEY=sk-ant-your-key-here
OPENAI_API_KEY=sk-proj-your-key-here
RESEND_API_KEY=re_your-key-here
chmod 600 .env
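Since a leaked .env is exactly the Pastebin scenario from the intro, it's worth a pre-deploy check that the file is actually owner-only. A minimal sketch (the function name and the hard-coded 600 expectation are our choices, not an OpenClaw convention):

```shell
# check_env_file FILE: verify a secrets file is readable by its owner only.
# Uses GNU stat syntax (Linux); on macOS, `stat -f '%Lp'` is the equivalent.
check_env_file() {
  local env_file="$1" mode
  [ -f "$env_file" ] || { echo "ERROR: $env_file missing"; return 1; }
  mode=$(stat -c '%a' "$env_file")   # octal permission bits, e.g. 600
  if [ "$mode" != "600" ]; then
    echo "WARN: $env_file has mode $mode (expected 600)"
    return 1
  fi
  echo "OK: $env_file is owner-only"
}
```

Wire it into your deploy script as a gate, e.g. `check_env_file /opt/openclaw/.env || exit 1`.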
Bridge networking: why it matters
The default Docker networking mode lets containers reach the host machine and other containers on the same Docker daemon. This means:
- A compromised container can scan your host's ports
- It can reach your database, your SSH agent, your other services
- It can talk to other tenant containers (if you're running multiple)
Bridge networking creates an isolated network segment:
┌─────────────────────────────────────────┐
│ Host (your server)                      │
│                                         │
│  ┌──────────────┐    ┌──────────────┐   │
│  │ openclaw-net │    │  other-net   │   │
│  │              │    │              │   │
│  │ Container A  │    │ Container B  │   │
│  │ (isolated)   │    │ (isolated)   │   │
│  └──────────────┘    └──────────────┘   │
│                                         │
│  ✓ Containers can't see each other      │
│  ✓ Containers can't reach the host      │
│  ✓ Only exposed via 127.0.0.1 port binds│
└─────────────────────────────────────────┘
If you're running multiple OpenClaw agents (for different customers, or different use cases), create a separate network per tenant:
docker network create tenant-alice
docker network create tenant-bob
This gives you true network isolation between tenants: Alice's container literally cannot reach Bob's, even if someone gets shell access inside the container.
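In Compose terms, per-tenant isolation means giving each service its own network instead of a shared one. This fragment is a sketch; the service and network names are placeholders:

```yaml
services:
  alice-bot:
    image: openclaw/openclaw:latest
    networks: [tenant-alice]            # only on Alice's network
    ports: ["127.0.0.1:19001:19001"]

  bob-bot:
    image: openclaw/openclaw:latest
    networks: [tenant-bob]              # only on Bob's network
    ports: ["127.0.0.1:19002:19001"]

networks:
  tenant-alice:
    driver: bridge
  tenant-bob:
    driver: bridge
```

Because the two services never share a network, Docker gives them no route to each other; each is reachable only through its own localhost port bind.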
Resource limits: preventing the 3 AM crash
Without resource limits, a single container can consume all available RAM. When that happens, the Linux OOM killer starts terminating processes, and it doesn't care which ones are important.
We've been there. Server went down at 3 AM because a container ate all 8GB of RAM. No swap configured, no resource limits. Everything died.
Prevention:
deploy:
  resources:
    limits:
      memory: 512M   # Hard cap: the container is killed if it exceeds this
      cpus: '1.0'    # Max 1 CPU core
    reservations:
      memory: 128M   # Guaranteed minimum
Also add swap to your host (not the container):
fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab
The limits.memory catches runaway containers. The swap catches everything else. Belt and suspenders.
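To confirm the swap actually took, you can read SwapTotal straight from /proc/meminfo. A small helper for a provisioning script (the threshold is this guide's 4 GB recommendation, not a hard requirement):

```shell
# swap_total_kb: configured swap size in kB (Linux: reads /proc/meminfo)
swap_total_kb() {
  awk '/^SwapTotal:/ {print $2}' /proc/meminfo
}

# warn_if_low_swap MIN_KB: exit non-zero when host swap is under MIN_KB
warn_if_low_swap() {
  local min_kb="$1" total
  total=$(swap_total_kb)
  if [ "$total" -lt "$min_kb" ]; then
    echo "WARN: swap is ${total} kB, below ${min_kb} kB"
    return 1
  fi
  echo "OK: swap is ${total} kB"
}
```

For the 4 GB target above, that's `warn_if_low_swap $((4 * 1024 * 1024))`.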
How much memory does OpenClaw actually need?
| Setup | Idle RAM | Active RAM |
|---|---|---|
| 1 agent, no active chats | 50-80 MB | n/a |
| 1 agent, 5 concurrent chats | 80-150 MB | 200-300 MB peaks |
| 2 agents, moderate traffic | 100-200 MB | 300-500 MB peaks |
A 512 MB limit is comfortable for a single agent. If you're running multiple agents in one container, go to 1 GB.
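To turn that table into a rule of thumb: pick the smallest standard limit that leaves roughly 50% headroom over your observed peak. A sketch of the heuristic (the 1.5x factor and the tier list are our assumptions, not OpenClaw requirements):

```shell
# suggest_mem_limit PEAK_MB: smallest standard tier with ~50% headroom over
# the observed peak RSS. Heuristic only; measure real peaks with `docker stats`.
suggest_mem_limit() {
  local peak_mb="$1" need tier
  need=$(( peak_mb * 3 / 2 ))   # 1.5x headroom over peak
  for tier in 256 512 1024 2048 4096; do
    if [ "$need" -le "$tier" ]; then
      echo "${tier}M"
      return 0
    fi
  done
  echo "over 4096M: size this one by hand"
  return 1
}
```

A single agent peaking at 300 MB lands on 512M, matching the table; two agents peaking around 500 MB land on 1024M.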
Volume management
Three directories need to survive container rebuilds:
volumes:
- ./config/openclaw.json:/app/openclaw.json:ro # Config (read-only)
- ./workspaces:/app/workspaces # Agent personality + memory
- ./data:/app/data # Conversation data
The :ro flag on the config mount means the container can read the config but can't modify it. If someone compromises the container, they can't change the configuration.
Backup your workspaces:
#!/bin/bash
# backup.sh: run daily via cron
BACKUP_DIR="/var/backups/openclaw/$(date +%Y-%m-%d)"
mkdir -p "$BACKUP_DIR"
cp -r /opt/openclaw/workspaces "$BACKUP_DIR/"
cp /opt/openclaw/config/openclaw.json "$BACKUP_DIR/"
# Keep last 30 days
find /var/backups/openclaw -maxdepth 1 -mtime +30 -exec rm -rf {} \;
chmod +x backup.sh
crontab -e
# Add: 0 3 * * * /opt/openclaw/backup.sh
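A backup you've never restored is theoretical. Here's a restore sketch to pair with backup.sh (paths mirror the script above; stop the container before running it and `docker compose up -d` afterwards):

```shell
# restore_backup SRC_DIR TARGET_DIR: copy workspaces and config back from a
# dated backup directory created by backup.sh
restore_backup() {
  local src="$1" target="$2"
  [ -d "$src/workspaces" ] || { echo "ERROR: $src has no workspaces/"; return 1; }
  mkdir -p "$target/config"
  cp -r "$src/workspaces" "$target/"
  if [ -f "$src/openclaw.json" ]; then
    cp "$src/openclaw.json" "$target/config/"
  fi
  echo "restored from $src"
}
```

Usage: `restore_backup /var/backups/openclaw/<date> /opt/openclaw`, then restart the container.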
Health checks and auto-recovery
The healthcheck in the Compose file pings the container every 30 seconds:
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:19001/api/health"]
  interval: 30s
  timeout: 5s
  retries: 3
  start_period: 15s
If the health check fails 3 times in a row, Docker marks the container as unhealthy. One caveat: plain Docker won't restart a container merely for being unhealthy. restart: unless-stopped only kicks in when the process crashes or the host reboots. If you want automatic restarts on failed health checks, add a sidecar such as willfarrell/autoheal, or have your external monitoring script run docker restart.
Monitor from outside too. The health check only catches crashes. For full monitoring:
#!/bin/bash
# Quick status check script
STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:19001/api/health)
if [ "$STATUS" != "200" ]; then
echo "OpenClaw is down! Status: $STATUS" | mail -s "Alert" [email protected]
fi
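One failed curl doesn't always mean the bot is down; it can be a momentary blip during a restart. To mirror the compose file's retries: 3 before paging yourself, wrap the probe in a small retry helper (the helper name is ours; the alert command is whatever you already use):

```shell
# probe_with_retries N CMD...: succeed if CMD succeeds within N attempts,
# sleeping 1s between tries
probe_with_retries() {
  local tries="$1" i
  shift
  for i in $(seq 1 "$tries"); do
    if "$@"; then
      return 0
    fi
    if [ "$i" -lt "$tries" ]; then sleep 1; fi
  done
  return 1
}
```

Then: `probe_with_retries 3 curl -sf http://127.0.0.1:19001/api/health || send_alert`, where send_alert is your existing mail command.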
Updating OpenClaw
When a new version is released:
cd /opt/openclaw
# Pull the latest image
docker compose pull
# Recreate with new image (keeps volumes)
docker compose up -d
# Verify
docker logs openclaw-main --tail 20
Downtime: About 5-10 seconds. Telegram will queue messages during this time and deliver them when the bot comes back.
Never use docker compose down unless you're intentionally stopping the bot. It removes the container. docker compose up -d is enough to recreate with the new image.
Watchtower in monitor-only mode will email you when a new image is available, so you don't have to check manually.
Running multiple agents
If you want multiple OpenClaw agents on one server, you have two options:
Option A: Multiple agents, one container
OpenClaw supports multiple agents in a single config:
{
  "agents": [
    { "slug": "support", "name": "Support Bot", "model": { "provider": "anthropic", "model": "claude-sonnet-4-20250514" } },
    { "slug": "sales", "name": "Sales Bot", "model": { "provider": "openai", "model": "gpt-4o" } }
  ],
  "channels": [
    { "type": "telegram", "agentSlug": "support", "config": { "token": "TOKEN_1" } },
    { "type": "telegram", "agentSlug": "sales", "config": { "token": "TOKEN_2" } }
  ]
}
Simple, but both agents share resources, and if one crashes, both go down.
Option B: One container per agent (recommended)
services:
  support-bot:
    image: openclaw/openclaw:latest
    container_name: openclaw-support
    networks: [openclaw-net]
    ports: ["127.0.0.1:19001:19001"]
    volumes:
      - ./config/support.json:/app/openclaw.json:ro
      - ./workspaces/support:/app/workspaces

  sales-bot:
    image: openclaw/openclaw:latest
    container_name: openclaw-sales
    networks: [openclaw-net]
    ports: ["127.0.0.1:19002:19001"]
    volumes:
      - ./config/sales.json:/app/openclaw.json:ro
      - ./workspaces/sales:/app/workspaces
Notice the port mapping 19002:19001: the container always listens on 19001 internally, but you map each container to a different host port.
Your reverse proxy then routes to each:
support.yourdomain.com → 127.0.0.1:19001
sales.yourdomain.com   → 127.0.0.1:19002
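As a sketch, the matching nginx server block for one of those hostnames might look like this (the domain and certificate paths are placeholders; repeat the block per agent with its own server_name and upstream port):

```nginx
server {
    listen 443 ssl;
    server_name support.yourdomain.com;

    # Placeholder cert paths -- substitute your own (e.g. issued by certbot)
    ssl_certificate     /etc/letsencrypt/live/support.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/support.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:19001;   # the sales bot would use 19002
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # If the gateway serves WebSockets, forward the upgrade headers too:
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```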
This is the approach we use at ClawPort: one container per customer, complete isolation, independent restarts and updates.
The downside: you're now managing port allocation, multiple reverse proxy configs, multiple SSL certificates, and per-container monitoring. For each new agent.
The full production checklist
Before you consider your Docker setup production-ready:
- Ports bound to 127.0.0.1 (not 0.0.0.0)
- Bridge network (not default/host)
- Memory limits set
- CPU limits set
- Swap configured on host (4GB+)
- Config mounted read-only
- API keys in .env file (not config)
- .env permissions set to 600
- Workspace volumes mounted
- Health check configured
- restart: unless-stopped
- Watchtower monitoring (not auto-updating)
- Daily backup cron
- Reverse proxy with SSL in front
- Firewall configured (only 22, 80, 443)
That's 15 items to configure correctly. Miss one and you have either a security hole or a reliability problem.
At ClawPort, we provision all of this automatically for every customer container: bridge networking, resource limits, health checks, SSL, monitoring, backups. You give us a bot name and a Telegram token; we give you a running agent in 60 seconds.