
Running 4 Claude Code Workers in a Single Docker Container

By Charles

We run four simultaneous Claude Code sessions in a single Docker container on an Unraid server. Each worker picks up tasks autonomously, coordinates through shared files, and avoids conflicts using a simple locking protocol. Here’s the architecture and what we learned running it in production.

Why Multiple Workers?

A single Claude Code session is powerful but sequential. While one session builds an app, another could be updating the website, and a third could be writing Home Assistant automations. The bottleneck isn’t compute — it’s context. Each session has its own conversation history and can only focus on one project at a time.

Running multiple workers in parallel lets us execute the equivalent of a small development team’s sprint in a single afternoon.

The Setup

All workers share one Docker container (claude-code1) on an Unraid server with dual EPYC 7282 processors and 64GB RAM. The key insight: Claude Code sessions are stateless processes that only need access to a shared filesystem. No inter-process communication, no message queues, no orchestration layer.

Unraid Host
└── Docker: claude-code1
    ├── /work/               (shared workspace)
    ├── Worker 1 (cc1)       → WORKER_ID=1
    ├── Worker 2 (cc2)       → WORKER_ID=2
    ├── Worker 3 (cc3)       → WORKER_ID=3
    └── Worker 4 (cc4)       → WORKER_ID=4
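
The container itself just needs the Claude Code CLI and the shared bind mount. A minimal sketch of how it could be created, assuming a prebuilt image called claude-code:latest and an Unraid share at /mnt/user/work (neither name comes from this setup):

# Keep one long-lived container running; workers attach to it instead of starting their own.
# The API key is passed once and shared by every worker session.
docker run -d --name claude-code1 \
  -v /mnt/user/work:/work \
  -e ANTHROPIC_API_KEY="$ANTHROPIC_API_KEY" \
  claude-code:latest sleep infinity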

Each worker launches via an alias that execs into the container with a unique WORKER_ID environment variable:

alias cc1='docker exec -it -e WORKER_ID=1 -w /work claude-code1 claude'
alias cc2='docker exec -it -e WORKER_ID=2 -w /work claude-code1 claude'
alias cc3='docker exec -it -e WORKER_ID=3 -w /work claude-code1 claude'
alias cc4='docker exec -it -e WORKER_ID=4 -w /work claude-code1 claude'

Project Locking

The coordination problem: two workers editing the same project simultaneously causes merge conflicts, corrupted state, and wasted work. We solve this with file-based locking — no databases, no distributed consensus, just markdown files.

Each worker has a status file (/work/_SYSTEM/WORKER1.md through WORKER4.md):

# WORKER 1

Project: apps
Status: ACTIVE

## Current Task
Building AutoLog v1.0.0 vehicle maintenance tracker

Before starting work on any project, workers must:

  1. Read all four worker files
  2. Check if another worker has the target project locked
  3. If clear, update their own file with the project name
  4. If another worker holds it, inform the user and pick a different task
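
The sessions perform this check by reading files, not by running a script, but a rough shell equivalent of steps 1 through 3 looks like this (the Project line format is taken from the worker file above; the target project is hypothetical):

# Claim a project only if no other worker's status file already lists it.
WORKER_ID=${WORKER_ID:?}
PROJECT="apps"
if grep -l "^Project: ${PROJECT}$" /work/_SYSTEM/WORKER[1-4].md | grep -qv "WORKER${WORKER_ID}.md"; then
  echo "Locked by another worker; pick a different task"
else
  sed -i "s/^Project: .*/Project: ${PROJECT}/" "/work/_SYSTEM/WORKER${WORKER_ID}.md"
fi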

This is cooperative locking — it relies on workers following the protocol honestly. Since all workers share the same CLAUDE.md instructions, the protocol is enforced at the prompt level.

Shared State Files

Workers communicate through a set of shared markdown files in /work/_SYSTEM/:

File              Purpose
WORKER[1-4].md    Current task and project lock
BACKLOG.md        Task queue; anyone can pick up work
NOW.md            Current priorities and blockers
PROJECTS.md       Project status overview
CHANGELOG.md      Append-only change history
BLOCKERS.md       Tasks waiting on external input
GOTCHAS.md        Lessons learned, workarounds

The CHANGELOG.md file is particularly useful. Workers append entries as they complete tasks, giving other workers visibility into what’s changed. Worker 1 builds an app, Worker 3 sees the changelog entry and adds it to the website — no explicit handoff needed.
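
The exact entry format matters less than the habit of appending. Here's a hypothetical example of what a worker might write after finishing a task:

# Illustrative only; in practice the session appends through its own file edits.
cat >> /work/_SYSTEM/CHANGELOG.md <<'EOF'

## 2026-01-30 Worker 1
- Built and deployed AutoLog v1.0.0 (vehicle maintenance tracker); lock on apps released
EOF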

The CLAUDE.md Contract

The entire coordination system lives in a single CLAUDE.md file at the workspace root. Every worker reads this on startup. It defines:

  • Model selection policy — Use the cheapest capable model (Ollama → Haiku → Opus)
  • Startup sequence — Read worker files, check locks, claim project
  • Lock protocol — How to claim and release projects
  • Post-task protocol — Update worker file, write changelog, release lock
  • Project structure — Where files live, how to deploy

This is the key architectural insight: the coordination protocol is part of the prompt, not the infrastructure. No custom tooling, no orchestration software. Just instructions that every session follows.
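
For a sense of what that looks like, here is a condensed, hypothetical excerpt of the lock-protocol portion of such a file (the real wording isn't reproduced here; the steps mirror the protocol described above):

# CLAUDE.md (illustrative excerpt)

## Startup
1. Read /work/_SYSTEM/WORKER1.md through WORKER4.md, NOW.md, and BACKLOG.md.
2. If your target project appears in another worker's file, choose different work.
3. Otherwise, write the project name and your current task to your own worker file.

## After every task
1. Append an entry to CHANGELOG.md.
2. Clear the project from your worker file to release the lock.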

Model Cost Optimization

Not every task needs the most expensive model. We enforce a three-tier model selection:

  1. Ollama (local, free) — Summaries, simple questions, status checks. Uses qwen2.5:7b running on the same Unraid server.
  2. Haiku (cheap) — File searches, moderate analysis, codebase exploration. Launched via the Task tool with model: "haiku".
  3. Opus (expensive) — Complex reasoning, code writing, architecture decisions. Only the main session.

Token usage is tracked per model. If a worker’s Haiku and Ollama counts are zero, they’re wasting money by using Opus for everything.
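
The cheapest tier only needs HTTP access to the Ollama server running on the host. A hedged sketch of pushing a trivial summary there instead of to Opus (the hostname is assumed, 11434 is Ollama's default port, and jq is assumed to be available for JSON escaping):

# Send a low-stakes summarization to the local model rather than burning Opus tokens.
PROMPT="Summarize this status file in three bullets: $(cat /work/_SYSTEM/NOW.md)"
curl -s http://unraid.local:11434/api/generate \
  -d "$(jq -n --arg p "$PROMPT" '{model: "qwen2.5:7b", prompt: $p, stream: false}')"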

What Works Well

Parallel project execution. Worker 1 built 7 Android apps while Worker 3 simultaneously managed the website, Home Assistant automations, system infrastructure, and documentation. Total output in one day exceeded what a single worker could produce in three.

Changelog-based coordination. Workers naturally discover each other’s output through the shared changelog. When Worker 1 finishes an app, Worker 3 sees the entry and immediately adds it to the website — privacy policy, apps page, homepage count — without being asked.

Autonomous task selection. When given “do whatever you want,” workers consult the backlog, check what’s blocked, and pick the most impactful available task. The backlog acts as a shared priority queue that any worker can pull from.
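
The backlog itself can stay plain. A hypothetical excerpt of what BACKLOG.md might hold (the real file isn't reproduced here):

# BACKLOG

- [ ] Add privacy policy pages for the new apps
- [ ] Push remaining Gitea repositories to backup
- [ ] Refresh system documentation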

What Doesn’t Work

Lock contention. When two workers need the same project, the first to update their file wins. The losing worker has to find alternative work. With only 5 projects and 4 workers, this happens more often than you’d expect.

Stale locks. If a worker session crashes or the user closes it without releasing the lock, the project stays locked until someone manually clears the worker file. There’s no timeout mechanism.
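
One cheap way to at least detect stale locks (not something this setup does today) is to look for worker files that claim to be active but haven't been modified in hours:

# Flag status files still marked ACTIVE that nobody has touched in over 4 hours.
find /work/_SYSTEM -name 'WORKER*.md' -mmin +240 -exec grep -l '^Status: ACTIVE' {} +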

Context loss. Each worker session has independent context. Worker 3 doesn’t know the implementation details of Worker 1’s apps — only what’s in the changelog. This occasionally leads to inconsistencies that require manual correction.

No real-time sync. File-based coordination has inherent lag. Worker 1 might finish an app seconds before Worker 3 checks the changelog. There’s no push notification — workers poll by reading files.

Production Numbers

In a single day (January 30, 2026), the multi-worker system produced:

  • 7 Android apps (Worker 1), more than 19,000 lines of TypeScript
  • 21-page website with 7 blog posts (Worker 3)
  • 27 Home Assistant automations + 5 reusable blueprints (Worker 3)
  • 8 Gitea repositories backed up and pushed
  • Full system documentation refresh

The infrastructure cost is one Docker container on existing hardware. The coordination overhead is a few markdown files. The output is a small team’s worth of development work.

Replicating This Setup

The entire system requires:

  1. A Docker container with Claude Code CLI installed
  2. A shared /work directory with project folders
  3. A CLAUDE.md file defining the coordination protocol
  4. Worker status files and a backlog
  5. Shell aliases for launching workers with unique IDs

No custom code, no orchestration platform, no CI/CD pipeline. The AI sessions coordinate through the simplest possible mechanism: reading and writing shared files.
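
Bootstrapping the shared state is a one-time job. A minimal sketch, assuming the layout described above (the initial file contents are placeholders):

# Create the system directory, one status file per worker, and the shared state files.
mkdir -p /work/_SYSTEM
for i in 1 2 3 4; do
  printf '# WORKER %s\n\nProject: none\nStatus: IDLE\n' "$i" > "/work/_SYSTEM/WORKER${i}.md"
done
touch /work/_SYSTEM/{BACKLOG,NOW,PROJECTS,CHANGELOG,BLOCKERS,GOTCHAS}.md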

The protocol is self-enforcing because it’s embedded in the prompt. Every worker reads the same instructions and follows the same rules. When the rules need updating, you edit one file and every new session picks up the changes.


Resources

  • Claude Code — Anthropic’s CLI for AI-assisted development. The tool that powers each worker session.
  • Unraid OS — Hosts the Docker container that runs all four workers on a single server.