Running Multiple OpenClaw Agents on One Server
One bot is easy. Three bots on one server? That's port management, reverse proxy routing, resource allocation, and monitoring per agent. Here's the production setup for multi-agent OpenClaw deployments.
Running one OpenClaw agent is straightforward. Our Docker guide covers it in detail.
Running three agents on the same server is where things get interesting. Each agent needs its own port, its own reverse proxy config, its own SSL certificate (or shared wildcard), its own workspace, and its own monitoring. Miss any of these and you get port conflicts, certificate errors, or agents that silently stop working.
Here's the setup we use at ClawPort for running dozens of agents on shared infrastructure.
The architecture
            ┌──────────────────────────────────────────────┐
            │                 Your Server                  │
Internet ──▶│                                              │
            │    ┌───────────────────────────────────┐     │
            │    │  Reverse Proxy (nginx/Caddy/NPM)  │     │
            │    │  - SSL termination                │     │
            │    │  - Route by hostname              │     │
            │    │  - Only webhook paths allowed     │     │
            │    └───┬──────────────┬──────────────┬─┘     │
            │        │              │              │       │
            │    ┌───┴──┐       ┌───┴──┐       ┌───┴──┐    │
            │    │:19001│       │:19002│       │:19003│    │
            │    │Agent1│       │Agent2│       │Agent3│    │
            │    │(net1)│       │(net2)│       │(net3)│    │
            │    └──────┘       └──────┘       └──────┘    │
            │     bridge         bridge         bridge     │
            └──────────────────────────────────────────────┘
Key principles:
- Each agent gets a unique host port (19001, 19002, 19003...)
- Each agent gets its own Docker bridge network (true isolation)
- All ports bound to 127.0.0.1 (not accessible from the internet)
- Reverse proxy routes by hostname to the correct agent
- Each agent has its own workspace directory
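The loopback rule is easy to break silently when adding agent number four in a hurry. A quick lint before every deploy catches it. This is a hypothetical helper, not part of OpenClaw, and it assumes port mappings use the quoted `"ip:host:container"` form shown throughout this guide:

```shell
#!/bin/bash
# check-bindings.sh — hypothetical lint: print any compose port mapping
# that is not bound to 127.0.0.1, and fail if one is found.
check_loopback_only() {
  local file="$1"
  local exposed
  # Match quoted mappings like "19001:19001" or "0.0.0.0:19001:19001"
  exposed=$(grep -oE '"[0-9.]*:?[0-9]+:[0-9]+"' "$file" | grep -v '127\.0\.0\.1' || true)
  if [ -n "$exposed" ]; then
    echo "Exposed beyond loopback:"
    echo "$exposed"
    return 1
  fi
  return 0
}
```

Run `check_loopback_only docker-compose.yml || exit 1` before every `docker compose up` and a stray `0.0.0.0` binding never reaches production.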
Port allocation
The most common mistake with multi-agent setups is port conflicts. OpenClaw always listens on the same internal port (19001 by default). You use Docker port mapping to assign unique host ports:
# Agent 1: internal 19001 → host 19001
ports: ["127.0.0.1:19001:19001"]
# Agent 2: internal 19001 → host 19002
ports: ["127.0.0.1:19002:19001"]
# Agent 3: internal 19001 → host 19003
ports: ["127.0.0.1:19003:19001"]
Track your port assignments. It sounds trivial, but after 5+ agents, you will forget which port is which. Keep a manifest:
# /opt/openclaw/PORT_MAP.md
# Port | Agent Name | Owner | Channel
# 19001 | SupportBot | TechShop | Telegram
# 19002 | SalesBot | TechShop | WhatsApp
# 19003 | PropertyBot | Amsterdam RE | WhatsApp
# 19004 | (available)  |              |
# 19005 | (available)  |              |
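A manifest only helps if it matches reality, and the compose file is the reality. A small check for duplicate host ports catches the classic copy-paste mistake where two services claim the same port. This is a hypothetical helper; it assumes the loopback-bound quoted mappings used above:

```shell
#!/bin/bash
# find-port-conflicts.sh — hypothetical helper: print any host port
# mapped by more than one service in the compose file.
find_port_conflicts() {
  local file="$1"
  # Pull the host port out of mappings like "127.0.0.1:19001:19001",
  # then report any port that appears more than once.
  sed -nE 's/.*"127\.0\.0\.1:([0-9]+):[0-9]+".*/\1/p' "$file" \
    | sort \
    | uniq -d
}
```

Empty output means the compose file is conflict-free; any port it prints is claimed twice.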
Network isolation
If you're running agents for different customers (or even different use cases), they should NOT share a Docker network. A shared network means Agent 1 can potentially reach Agent 2's internal port.
# docker-compose.yml
services:
  agent-support:
    image: openclaw/openclaw:latest
    container_name: openclaw-support
    networks: [net-support]
    ports: ["127.0.0.1:19001:19001"]
    volumes:
      - ./agents/support/config.json:/app/openclaw.json:ro
      - ./agents/support/workspaces:/app/workspaces

  agent-sales:
    image: openclaw/openclaw:latest
    container_name: openclaw-sales
    networks: [net-sales]
    ports: ["127.0.0.1:19002:19001"]
    volumes:
      - ./agents/sales/config.json:/app/openclaw.json:ro
      - ./agents/sales/workspaces:/app/workspaces

  agent-property:
    image: openclaw/openclaw:latest
    container_name: openclaw-property
    networks: [net-property]
    ports: ["127.0.0.1:19003:19001"]
    volumes:
      - ./agents/property/config.json:/app/openclaw.json:ro
      - ./agents/property/workspaces:/app/workspaces

networks:
  net-support:
    driver: bridge
  net-sales:
    driver: bridge
  net-property:
    driver: bridge
Three containers, three networks, zero cross-talk.
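Like port conflicts, accidental network sharing is a copy-paste mistake waiting to happen. You can lint for it the same way. This is a hypothetical helper and it assumes the inline `networks: [name]` form used in the compose file above:

```shell
#!/bin/bash
# find-shared-networks.sh — hypothetical helper: print any Docker
# network referenced by more than one service (assumes the inline
# "networks: [name]" form used in the compose file above).
find_shared_networks() {
  local file="$1"
  sed -nE 's/.*networks: \[([a-z0-9-]+)\].*/\1/p' "$file" \
    | sort \
    | uniq -d
}
```

Any network this prints is shared between services — which, for multi-customer setups, means the isolation boundary is gone.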
Directory structure
Keep things organized from the start:
/opt/openclaw/
├── docker-compose.yml
├── .env                    # Shared secrets
├── PORT_MAP.md             # Port allocation tracking
├── agents/
│   ├── support/
│   │   ├── config.json     # OpenClaw config
│   │   └── workspaces/
│   │       └── main/
│   │           ├── SOUL.md
│   │           ├── AGENTS.md
│   │           └── MEMORY.md
│   ├── sales/
│   │   ├── config.json
│   │   └── workspaces/
│   │       └── main/
│   │           ├── SOUL.md
│   │           └── ...
│   └── property/
│       ├── config.json
│       └── workspaces/
│           └── main/
│               ├── SOUL.md
│               └── ...
└── backups/
    └── 2026-03-08/
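Creating this skeleton by hand for every new agent invites typos. A small bootstrap function keeps the layout consistent. This is a hypothetical helper — the file names match the structure above, but the script itself is not part of OpenClaw:

```shell
#!/bin/bash
# new-agent.sh — hypothetical bootstrap: create the directory skeleton
# for a new agent under a base directory (default /opt/openclaw).
new_agent() {
  local name="$1"
  local base="${2:-/opt/openclaw}"
  local ws="$base/agents/$name/workspaces/main"
  mkdir -p "$ws"
  # Empty placeholders — fill these in before first start
  : > "$base/agents/$name/config.json"
  for f in SOUL.md AGENTS.md MEMORY.md; do
    : > "$ws/$f"
  done
  echo "Created $base/agents/$name — remember to claim a port in PORT_MAP.md"
}
```

Usage: `new_agent billing` creates `/opt/openclaw/agents/billing/` with an empty config and workspace files, ready to edit.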
Reverse proxy routing
Each agent needs its own hostname (or path prefix). Hostname routing is cleaner:
With Caddy
support.yourdomain.com {
    handle /api/telegram/webhook {
        reverse_proxy 127.0.0.1:19001
    }
    handle {
        respond "Forbidden" 403
    }
}

sales.yourdomain.com {
    handle /api/whatsapp/webhook {
        reverse_proxy 127.0.0.1:19002
    }
    handle {
        respond "Forbidden" 403
    }
}

property.yourdomain.com {
    handle /api/whatsapp/webhook {
        reverse_proxy 127.0.0.1:19003
    }
    handle {
        respond "Forbidden" 403
    }
}
Caddy automatically provisions a separate SSL certificate for each hostname.
With nginx + wildcard SSL
If you have a wildcard certificate (*.yourdomain.com), you can use one cert for all agents:
server {
    listen 443 ssl;
    server_name *.yourdomain.com;

    ssl_certificate     /etc/ssl/wildcard.yourdomain.com.pem;
    ssl_certificate_key /etc/ssl/wildcard.yourdomain.com.key;

    # Route by hostname
    set $backend "";
    if ($host = "support.yourdomain.com")  { set $backend "127.0.0.1:19001"; }
    if ($host = "sales.yourdomain.com")    { set $backend "127.0.0.1:19002"; }
    if ($host = "property.yourdomain.com") { set $backend "127.0.0.1:19003"; }
    # Unknown subdomains match the wildcard too — reject them explicitly
    if ($backend = "") { return 403; }

    location /api/telegram/webhook { proxy_pass http://$backend; }
    location /api/whatsapp/webhook { proxy_pass http://$backend; }
    location / { return 403; }
}
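The `if` chain works for a handful of hostnames, but nginx's `map` directive is the more idiomatic way to route by hostname and sidesteps the well-known pitfalls of `if` in the rewrite module. It must be declared in the `http` context, outside the `server` block — for example in its own drop-in file (the path below is just a suggestion):

```nginx
# e.g. /etc/nginx/conf.d/openclaw-map.conf — http context
map $host $backend {
    support.yourdomain.com    127.0.0.1:19001;
    sales.yourdomain.com      127.0.0.1:19002;
    property.yourdomain.com   127.0.0.1:19003;
    default                   "";
}
```

With the map in place, delete the `set`/`if` lines from the `server` block; `$backend` is populated automatically per request, and the empty `default` still lets you `return 403` for unknown hostnames.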
Resource allocation
With multiple agents sharing one server, resource management matters:
deploy:
  resources:
    limits:
      memory: 512M    # Per agent
      cpus: '0.5'     # Half a CPU core
    reservations:
      memory: 128M
Plan your capacity:
| Server | RAM | Agents (comfortable) | Agents (maximum) |
|---|---|---|---|
| 2GB | 1GB usable | 1-2 | 3 |
| 4GB | 3GB usable | 3-5 | 8 |
| 8GB | 6GB usable | 8-10 | 15 |
| 16GB | 14GB usable | 15-25 | 40 |
"Usable" accounts for OS overhead, reverse proxy, monitoring, and Docker itself.
At ClawPort, we run on 8GB servers (Hetzner CPX32) and comfortably host 8-10 agents per server with headroom for traffic spikes.
Monitoring per agent
With one agent, checking docker logs is enough. With five, you need structured monitoring:
#!/bin/bash
# check-agents.sh — run every 5 minutes via cron

echo "=== Agent Health Check $(date) ===" >> /var/log/openclaw-health.log

for port in 19001 19002 19003; do
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:$port/api/health 2>/dev/null)
  CONTAINER=$(docker ps --filter "publish=$port" --format "{{.Names}}" 2>/dev/null)
  if [ "$STATUS" != "200" ]; then
    echo "ALERT: $CONTAINER (port $port) is DOWN (status: $STATUS)" >> /var/log/openclaw-health.log
    # Send alert via email/Telegram/webhook
    curl -s -X POST "https://api.telegram.org/bot<YOUR_ALERT_BOT_TOKEN>/sendMessage" \
      -d "chat_id=<YOUR_CHAT_ID>&text=🚨 Agent $CONTAINER (port $port) is DOWN"
  else
    echo "OK: $CONTAINER (port $port)" >> /var/log/openclaw-health.log
  fi
done

chmod +x /opt/openclaw/check-agents.sh
crontab -e
# Add: */5 * * * * /opt/openclaw/check-agents.sh
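One detail the script leaves open: `/var/log/openclaw-health.log` grows forever. A standard logrotate drop-in keeps it bounded (the retention values below are a reasonable starting point, not a requirement):

```
# /etc/logrotate.d/openclaw-health
/var/log/openclaw-health.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
```

That keeps roughly two months of health history — enough to spot a flaky agent's pattern without filling the disk.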
Updating agents independently
With multiple agents, you don't always want to update them all at once. One might be on a stable version while you test a new release on another.
# Update just one agent
docker compose pull agent-support
docker compose up -d agent-support
# Check it's working
docker logs openclaw-support --tail 20
curl http://127.0.0.1:19001/api/health
# Then update the others
docker compose pull agent-sales
docker compose up -d agent-sales
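Staged updates only work reliably if agents aren't all tracking `:latest` — otherwise `docker compose pull` gives every agent whatever was pushed last. Pinning a version tag per service makes the rollout state visible in the compose file itself (the tag numbers below are illustrative; check the registry for real ones):

```yaml
services:
  agent-support:
    image: openclaw/openclaw:2.1.0   # canary: testing the new release
  agent-sales:
    image: openclaw/openclaw:2.0.3   # stable
  agent-property:
    image: openclaw/openclaw:2.0.3   # stable
```

Promote the canary by bumping the stable agents' tags once it has run clean for a day or two — and rolling back is just reverting the tag and re-running `docker compose up -d`.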
Backup strategy
Each agent needs independent backups:
#!/bin/bash
# backup-all.sh

DATE=$(date +%Y-%m-%d)
BACKUP_DIR="/var/backups/openclaw/$DATE"
mkdir -p "$BACKUP_DIR"

for agent in support sales property; do
  mkdir -p "$BACKUP_DIR/$agent"
  cp -r /opt/openclaw/agents/$agent/workspaces "$BACKUP_DIR/$agent/"
  cp /opt/openclaw/agents/$agent/config.json "$BACKUP_DIR/$agent/"
done

# Keep 30 days (-mindepth 1 so the backup root itself is never matched)
find /var/backups/openclaw -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} \;
echo "Backup complete: $BACKUP_DIR"
The maintenance multiplication problem
Here's the reality of multi-agent self-hosting:
| Task | 1 agent | 5 agents |
|---|---|---|
| Initial setup | 3 hours | 8-12 hours |
| Port allocation | Trivial | Track carefully |
| Reverse proxy | 1 config block | 5 config blocks + wildcard SSL |
| Monitoring | Check one log | Check five logs + health script |
| Updates | 1 command | 5 commands (staged) |
| Backups | 1 directory | 5 directories |
| Debugging | 1 container | "Which container broke?" |
| Monthly maintenance | 2-3 hours | 6-10 hours |
It's not that any single task is hard. It's that everything multiplies. And when agent 3 stops responding at midnight, you need to figure out if it's a port conflict, a resource limit, a webhook issue, or a config error — across five containers.
ClawPort handles port allocation, network isolation, SSL, monitoring, backups, and updates for every agent automatically. Add a new agent from the dashboard — plans from $10/month, deployed in seconds, fully isolated. Scale your agents →