Potatoofdoom's little space for projects and ideas.
2026-02-11 | ai, openclaw, infrastructure, tailscale, self-hosting, agents
How I set up a distributed AI operations team using OpenClaw, Tailscale, and a lot of terminal tabs.
I've been building something weird and fun: a fleet of AI agents, each running on different machines across multiple countries, all coordinated through a single command structure. Here's how it works.
I run OpenClaw — an open-source AI agent framework that gives Claude (and other LLMs) persistent memory, tool access, and the ability to actually do things on your machine. It connects to messaging platforms like Telegram, Discord, and Signal, so you can talk to your agent like a teammate.
Most people run one instance. I'm running five — with two more on the way.
| Machine | Location | OS | Role |
|---|---|---|---|
| Mac mini | Los Angeles | macOS (Apple Silicon) | Hub — the "CTO" agent that orchestrates the fleet |
| Windows laptop | Mobile | Windows | Social media + stream operations |
| Cloud VPS #1 | DigitalOcean | Linux (x86) | DevOps, content, and website hosting |
| Cloud VPS #2 | DigitalOcean | Linux (x86) | Research + personal ops |
| Raspberry Pi | Southeast Asia | Linux (ARM) | Personal assistant — runs 24/7 on a $35 computer |
And coming soon:
| Machine | Location | OS | Role |
|---|---|---|---|
| GPU Desktop | East Asia | Windows + Nvidia GPU | Local model inference + video/audio rendering |
| MacBook | Mobile | macOS | Daily driver — unnamed, waiting for its soul |
Each machine runs its own OpenClaw instance with its own Telegram bot, its own personality (defined in a SOUL.md file), and its own area of responsibility.
Everything is connected via Tailscale — a mesh VPN that makes every machine addressable to every other machine, regardless of where it physically sits. A machine in one country can SSH into a machine in another as easily as if they were on the same desk.
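To make that concrete, here's a minimal sketch of how the hub can fan a command out across the tailnet. The MagicDNS hostnames are invented for illustration; substitute your own machine names.

```shell
# Run one command on every machine in the fleet over the tailnet.
# Hostnames are hypothetical Tailscale MagicDNS names, not my real ones.
FLEET="hub-mini social-laptop vps-devops vps-research pi-assistant"

fleet_run() {
  cmd="$1"
  for host in $FLEET; do
    printf '%s: ' "$host"
    # Tailscale makes each peer addressable by name, so plain SSH works;
    # a short timeout keeps one dead machine from stalling the sweep.
    ssh -o ConnectTimeout=5 "$host" "$cmd" || echo "unreachable"
  done
}

# Example: fleet_run "uptime"
```

Because the mesh flattens the network, the same one-liner works whether the target is in the next room or another hemisphere.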
The hub agent has SSH access to all remotes, which lets it run commands, collect status, and push deploys to any machine in the fleet.
This isn't theoretical — it's running right now. The hub runs a health check across all machines every 6 hours, monitoring uptime, disk usage, memory, and gateway status.
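The report itself is ordinary Unix plumbing. Here's a sketch of the per-machine metrics (my reconstruction of what such a check collects; gateway status is omitted since it depends on the OpenClaw setup), along with the kind of crontab entry that would fire it every 6 hours:

```shell
# Crontab entry on the hub (hypothetical script path):
#   0 */6 * * * /usr/local/bin/fleet-health.sh

health_report() {
  # The three generic metrics from the sweep: uptime, disk, memory.
  echo "uptime: $(uptime -p)"
  echo "disk:   $(df -P / | awk 'NR==2 {print $5}') used on /"
  echo "memory: $(free -m | awk 'NR==2 {print $3 "/" $2 " MB used"}')"
}

health_report
```

Run remotely over SSH from the hub, the same function produces one compact status line per metric for every machine.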
Each agent runs Claude as its primary model, but every instance is configured with a fallback chain of alternate providers, ending at a local Ollama instance.
This means the fleet keeps running even if one provider has an outage. The local Ollama instance is the ultimate fallback: no internet required. Once the GPU desktop is online, we'll have serious local inference power across the fleet.
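A quick way to see whether the last link in the chain is alive: Ollama serves an HTTP API on localhost port 11434 by default, and `/api/tags` lists the models it has pulled (port and endpoint per Ollama's documentation; the probe itself is just a convenience sketch).

```shell
# Probe the local Ollama daemon, the end of the fallback chain.
if curl -sf http://localhost:11434/api/tags >/dev/null 2>&1; then
  ollama_status="up"
else
  ollama_status="down"
fi
echo "local fallback (ollama): $ollama_status"
```

"down" here means the fleet would have no offline option left, which is exactly the state the health sweep should flag.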
OpenClaw uses a SOUL.md file to define each agent's personality and role. This isn't just flavor text; it genuinely changes how the agent operates.
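I won't publish my agents' actual souls, but a hypothetical SOUL.md gives the flavor (every detail below is invented for illustration):

```markdown
# SOUL.md

You are "Fern", the research agent for the fleet.

## Personality
Curious, skeptical, concise. Cite sources; say "I don't know" freely.

## Role
- Handle deep-dive research requests from the hub
- Maintain the reading-list notes in memory
- Never touch production machines; escalate deploys to the hub agent
```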
Each agent maintains its own memory through markdown files on disk.
When an agent wakes up (every session is a fresh start with LLMs), it reads these files to rebuild context. It's like a human reviewing their notes before starting work. The agents also maintain and prune their own memories during idle heartbeat cycles.
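As an illustration of what those notes look like, here's a made-up fragment (file name, layout, and contents are my invention, not a prescribed OpenClaw structure):

```markdown
# MEMORY.md (illustrative)

## Standing context
- I am the research agent; the hub orchestrates deploys
- Owner prefers summaries under 200 words

## Recent
- Finished the VPS pricing comparison; full notes in a separate file
```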
I message my bots on Telegram like I'd message a coworker.
The key insight: these aren't chatbots. They have file system access, shell access, web access, and persistent memory. They're more like junior employees who happen to live inside terminals.
Right now, the whole operation runs on about $200/month — mostly Claude API credits. The Raspberry Pi costs nothing to run. The VPS instances are cheap. Tailscale is free for personal use.
For what amounts to a 24/7 distributed operations team spanning three countries, that's absurdly cheap.
OpenClaw is open source: github.com/openclaw/openclaw
You don't need five machines. Start with one. Give it a soul. Let it surprise you.
This post was drafted by one agent and published to production by another via SSH. The first real output of a multi-agent content pipeline.