
Running Multiple OpenClaw Agents on One Server

One bot is easy. Three bots on one server? That's port management, reverse proxy routing, resource allocation, and monitoring per agent. Here's the production setup for multi-agent OpenClaw deployments.

By ClawPort Team

Running one OpenClaw agent is straightforward. Our Docker guide covers it in detail.

Running three agents on the same server is where things get interesting. Each agent needs its own port, its own reverse proxy config, its own SSL certificate (or shared wildcard), its own workspace, and its own monitoring. Miss any of these and you get port conflicts, certificate errors, or agents that silently stop working.

Here's the setup we use at ClawPort for running dozens of agents on shared infrastructure.

The architecture

                    ┌─────────────────────────────────────────┐
                    │              Your Server                │
Internet ──────────▶│                                         │
                    │  ┌──────────────────────────────────┐   │
                    │  │ Reverse Proxy (nginx/Caddy/NPM)  │   │
                    │  │   - SSL termination              │   │
                    │  │   - Route by hostname            │   │
                    │  │   - Only webhook paths allowed   │   │
                    │  └─────┬──────────┬──────────┬──────┘   │
                    │        │          │          │          │
                    │   ┌────┴───┐ ┌────┴───┐ ┌────┴───┐      │
                    │   │ :19001 │ │ :19002 │ │ :19003 │      │
                    │   │ Agent1 │ │ Agent2 │ │ Agent3 │      │
                    │   │ (net1) │ │ (net2) │ │ (net3) │      │
                    │   └────────┘ └────────┘ └────────┘      │
                    │     bridge     bridge     bridge        │
                    └─────────────────────────────────────────┘

Key principles:

  • Each agent gets a unique host port (19001, 19002, 19003...)
  • Each agent gets its own Docker bridge network (true isolation)
  • All ports bound to 127.0.0.1 (not accessible from internet)
  • Reverse proxy routes by hostname to the correct agent
  • Each agent has its own workspace directory

Port allocation

The most common mistake with multi-agent setups is port conflicts. OpenClaw always listens on the same internal port (19001 by default). You use Docker port mapping to assign unique host ports:

# Agent 1: internal 19001 β†’ host 19001
ports: ["127.0.0.1:19001:19001"]

# Agent 2: internal 19001 β†’ host 19002
ports: ["127.0.0.1:19002:19001"]

# Agent 3: internal 19001 β†’ host 19003
ports: ["127.0.0.1:19003:19001"]
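The scheme scales mechanically: the container port is always 19001, and the host port is a base plus the agent's index. A throwaway sketch of the convention (the agent names are illustrative):

```shell
# Each agent listens on 19001 inside its container; only the
# host-side port varies: 19000 + agent index, bound to loopback.
BASE=19000
i=1
for agent in support sales property; do
    echo "agent-$agent -> 127.0.0.1:$((BASE + i)):19001"
    i=$((i + 1))
done
```

Adding a fourth agent means taking the next index; nothing inside any container changes.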

Track your port assignments. It sounds trivial, but after 5+ agents, you will forget which port is which. Keep a manifest:

# /opt/openclaw/PORT_MAP.md
# Port  | Agent Name    | Owner           | Channel
# 19001 | SupportBot    | TechShop        | Telegram
# 19002 | SalesBot      | TechShop        | WhatsApp
# 19003 | PropertyBot   | Amsterdam RE    | WhatsApp
# 19004 | (available)   |                 |
# 19005 | (available)   |                 |
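Because the manifest is machine-readable, finding the next free port can be scripted rather than eyeballed. A sketch that picks the first row marked (available) (it builds a sample manifest inline; point MANIFEST at the real /opt/openclaw/PORT_MAP.md in practice):

```shell
# Pick the first port marked (available) in the manifest.
MANIFEST=$(mktemp)
cat > "$MANIFEST" <<'EOF'
# Port  | Agent Name    | Owner           | Channel
# 19001 | SupportBot    | TechShop        | Telegram
# 19002 | SalesBot      | TechShop        | WhatsApp
# 19003 | (available)   |                 |
EOF

# Second whitespace-separated field of the first (available) line
NEXT=$(awk '/\(available\)/ { print $2; exit }' "$MANIFEST")
echo "Next free port: $NEXT"
rm -f "$MANIFEST"
```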

Network isolation

If you're running agents for different customers (or even different use cases), they should NOT share a Docker network. A shared network means Agent 1 can potentially reach Agent 2's internal port.

# docker-compose.yml
services:
  agent-support:
    image: openclaw/openclaw:latest
    container_name: openclaw-support
    networks: [net-support]
    ports: ["127.0.0.1:19001:19001"]
    volumes:
      - ./agents/support/config.json:/app/openclaw.json:ro
      - ./agents/support/workspaces:/app/workspaces

  agent-sales:
    image: openclaw/openclaw:latest
    container_name: openclaw-sales
    networks: [net-sales]
    ports: ["127.0.0.1:19002:19001"]
    volumes:
      - ./agents/sales/config.json:/app/openclaw.json:ro
      - ./agents/sales/workspaces:/app/workspaces

  agent-property:
    image: openclaw/openclaw:latest
    container_name: openclaw-property
    networks: [net-property]
    ports: ["127.0.0.1:19003:19001"]
    volumes:
      - ./agents/property/config.json:/app/openclaw.json:ro
      - ./agents/property/workspaces:/app/workspaces

networks:
  net-support:
    driver: bridge
  net-sales:
    driver: bridge
  net-property:
    driver: bridge

Three containers, three networks, zero cross-talk.

Directory structure

Keep things organized from the start:

/opt/openclaw/
├── docker-compose.yml
├── .env                        # Shared secrets
├── PORT_MAP.md                 # Port allocation tracking
├── agents/
│   ├── support/
│   │   ├── config.json         # OpenClaw config
│   │   └── workspaces/
│   │       └── main/
│   │           ├── SOUL.md
│   │           ├── AGENTS.md
│   │           └── MEMORY.md
│   ├── sales/
│   │   ├── config.json
│   │   └── workspaces/
│   │       └── main/
│   │           ├── SOUL.md
│   │           └── ...
│   └── property/
│       ├── config.json
│       └── workspaces/
│           └── main/
│               ├── SOUL.md
│               └── ...
└── backups/
    └── 2026-03-08/
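Scaffolding that layout for each new agent is worth scripting so no directory gets forgotten. A sketch, using a temp directory in place of /opt/openclaw (swap ROOT for the real path when provisioning):

```shell
# Create the standard layout for a new agent named "newbot".
ROOT=$(mktemp -d)        # /opt/openclaw in production
AGENT=newbot

mkdir -p "$ROOT/agents/$AGENT/workspaces/main" "$ROOT/backups"
touch "$ROOT/agents/$AGENT/config.json"
for f in SOUL.md AGENTS.md MEMORY.md; do
    touch "$ROOT/agents/$AGENT/workspaces/main/$f"
done

ls "$ROOT/agents/$AGENT/workspaces/main"
```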

Reverse proxy routing

Each agent needs its own hostname (or path prefix). Hostname routing is cleaner:

With Caddy

support.yourdomain.com {
    handle /api/telegram/webhook {
        reverse_proxy 127.0.0.1:19001
    }
    handle { respond "Forbidden" 403 }
}

sales.yourdomain.com {
    handle /api/whatsapp/webhook {
        reverse_proxy 127.0.0.1:19002
    }
    handle { respond "Forbidden" 403 }
}

property.yourdomain.com {
    handle /api/whatsapp/webhook {
        reverse_proxy 127.0.0.1:19003
    }
    handle { respond "Forbidden" 403 }
}

Caddy automatically provisions a separate SSL certificate for each hostname.

With nginx + wildcard SSL

If you have a wildcard certificate (*.yourdomain.com), you can use one cert for all agents:

server {
    listen 443 ssl;
    server_name *.yourdomain.com;

    ssl_certificate /etc/ssl/wildcard.yourdomain.com.pem;
    ssl_certificate_key /etc/ssl/wildcard.yourdomain.com.key;

    # Route by hostname (a map block in the http context is the more
    # idiomatic alternative to these if/set pairs)
    set $backend "";
    if ($host = "support.yourdomain.com") { set $backend "127.0.0.1:19001"; }
    if ($host = "sales.yourdomain.com") { set $backend "127.0.0.1:19002"; }
    if ($host = "property.yourdomain.com") { set $backend "127.0.0.1:19003"; }
    if ($backend = "") { return 421; }   # unknown hostname: reject instead of 500

    location /api/telegram/webhook { proxy_pass http://$backend; }
    location /api/whatsapp/webhook { proxy_pass http://$backend; }
    location / { return 403; }
}

Resource allocation

With multiple agents sharing one server, resource management matters:

deploy:
  resources:
    limits:
      memory: 512M     # Per agent
      cpus: '0.5'      # Half a CPU core
    reservations:
      memory: 128M

Plan your capacity:

Server | RAM          | Agents (comfortable) | Agents (maximum)
2GB    | 1GB usable   | 1-2                  | 3
4GB    | 3GB usable   | 3-5                  | 8
8GB    | 6GB usable   | 8-10                 | 15
16GB   | 14GB usable  | 15-25                | 40

"Usable" accounts for OS overhead, reverse proxy, monitoring, and Docker itself.
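The bounds in the table follow from simple division. With the 512M limit and 128M reservation from the deploy block above, an 8GB server (about 6GB usable) fits 12 agents if they all peak simultaneously and far more if they mostly idle; the "comfortable" figures sit deliberately near the worst case:

```shell
# Capacity bounds for an 8GB server (~6GB usable).
USABLE_MB=6144
LIMIT_MB=512      # per-agent hard cap
RESERVE_MB=128    # per-agent guaranteed minimum

echo "Everyone at peak: $((USABLE_MB / LIMIT_MB)) agents"
echo "Everyone idling:  $((USABLE_MB / RESERVE_MB)) agents"
```

Real traffic lands between the two, which is why the "maximum" column relies on agents rarely peaking at the same time.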

At ClawPort, we run on 8GB servers (Hetzner CPX32) and comfortably host 8-10 agents per server with headroom for traffic spikes.

Monitoring per agent

With one agent, checking docker logs is enough. With five, you need structured monitoring:

#!/bin/bash
# check-agents.sh β€” run every 5 minutes via cron

echo "=== Agent Health Check $(date) ===" >> /var/log/openclaw-health.log

for port in 19001 19002 19003; do
    STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:$port/api/health 2>/dev/null)
    CONTAINER=$(docker ps --filter "publish=$port" --format "{{.Names}}" 2>/dev/null)

    if [ "$STATUS" != "200" ]; then
        echo "ALERT: $CONTAINER (port $port) is DOWN (status: $STATUS)" >> /var/log/openclaw-health.log
        # Send alert via email/Telegram/webhook
        curl -s -X POST "https://api.telegram.org/bot<YOUR_ALERT_BOT_TOKEN>/sendMessage" \
          -d "chat_id=<YOUR_CHAT_ID>&text=🚨 Agent $CONTAINER (port $port) is DOWN"
    else
        echo "OK: $CONTAINER (port $port)" >> /var/log/openclaw-health.log
    fi
done
Make it executable and schedule it:

chmod +x /opt/openclaw/check-agents.sh
crontab -e
# Add: */5 * * * * /opt/openclaw/check-agents.sh

Updating agents independently

With multiple agents, you don't always want to update them all at once. One might be on a stable version while you test a new release on another.

# Update just one agent
docker compose pull agent-support
docker compose up -d agent-support

# Check it's working
docker logs openclaw-support --tail 20
curl http://127.0.0.1:19001/api/health

# Then update the others
docker compose pull agent-sales
docker compose up -d agent-sales
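Version drift between agents is easier to manage when it's explicit. Pinning image tags per service makes the staged rollout deliberate rather than accidental (the tag 2.1.0 here is hypothetical; use whatever tags the OpenClaw registry actually publishes):

```yaml
services:
  agent-support:
    image: openclaw/openclaw:2.1.0    # pinned: stays put until you bump it
  agent-sales:
    image: openclaw/openclaw:latest   # canary: picks up new releases on pull
```

With this split, `docker compose pull` only ever moves the canary; promoting a release means editing one tag and re-running `up -d`.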

Backup strategy

Each agent needs independent backups:

#!/bin/bash
# backup-all.sh
DATE=$(date +%Y-%m-%d)
BACKUP_DIR="/var/backups/openclaw/$DATE"
mkdir -p "$BACKUP_DIR"

for agent in support sales property; do
    mkdir -p "$BACKUP_DIR/$agent"
    cp -r /opt/openclaw/agents/$agent/workspaces "$BACKUP_DIR/$agent/"
    cp /opt/openclaw/agents/$agent/config.json "$BACKUP_DIR/$agent/"
done

# Keep 30 days (-mindepth 1 so the parent directory itself is never deleted)
find /var/backups/openclaw -mindepth 1 -maxdepth 1 -mtime +30 -exec rm -rf {} \;
echo "Backup complete: $BACKUP_DIR"
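An untested backup is a guess. A quick integrity check diffs the copy against the source; this sketch uses throwaway directories in place of the real workspace and backup paths:

```shell
# Simulate a backup, then verify it byte-for-byte.
LIVE=$(mktemp -d)      # stands in for /opt/openclaw/agents/support
BACKUP=$(mktemp -d)    # stands in for /var/backups/openclaw/<date>

echo "persona notes" > "$LIVE/SOUL.md"
cp -r "$LIVE/." "$BACKUP/"                 # what backup-all.sh does

if diff -r "$LIVE" "$BACKUP" > /dev/null; then
    echo "Backup verified"
else
    echo "Backup MISMATCH" >&2
fi
```

Running the same diff against yesterday's real backup once a week catches silent failures (full disk, wrong path) before you need the restore.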

The maintenance multiplication problem

Here's the reality of multi-agent self-hosting:

Task                | 1 agent         | 5 agents
Initial setup       | 3 hours         | 8-12 hours
Port allocation     | Trivial         | Track carefully
Reverse proxy       | 1 config block  | 5 config blocks + wildcard SSL
Monitoring          | Check one log   | Check five logs + health script
Updates             | 1 command       | 5 commands (staged)
Backups             | 1 directory     | 5 directories
Debugging           | 1 container     | "Which container broke?"
Monthly maintenance | 2-3 hours       | 6-10 hours

It's not that any single task is hard. It's that everything multiplies. And when agent 3 stops responding at midnight, you need to figure out whether it's a port conflict, a resource limit, a webhook issue, or a config error, across five containers.


ClawPort handles port allocation, network isolation, SSL, monitoring, backups, and updates for every agent automatically. Add a new agent from the dashboard: plans from $10/month, deployed in seconds, fully isolated. Scale your agents →

Ready to deploy your AI agent?

Get started with ClawPort in 60 seconds. No credit card required.

Get Started Free