OpenClaw is a useful case study, but the lesson is broader: always-on agents with system access are a security category of their own. They are connected to untrusted inputs (email, web, chat), they act autonomously, and they run continuously. That combination makes isolation non-negotiable.
OpenClaw made the risk visible because it runs locally by default and encourages deep access. The real issue is the pattern: giving LLMs privileged access without respecting the blast radius.
The Core Failure Mode
The common failure mode is not a “bug.” It is prompt injection plus privileged access. An agent reads untrusted content, treats it like instructions, and uses its permissions to expose secrets or take actions you did not intend.
When you give an LLM access to external data (emails, web pages, Slack messages), you give attackers a surface to manipulate that data and, through it, your system.
Why Docker Is Not Enough
Docker helps with process isolation, but it is not a security boundary for an agent that needs file access.
Docker provides:
- Process isolation from the host
- Filesystem separation (until you mount host volumes)
- Network namespace isolation
Docker does not protect against:
- Volume mounts exposing host files (the common case for AI coding tools)
- Docker socket access, which grants full host control
- Privileged containers
- Kernel exploits or misconfigurations
If your agent needs to edit code, it needs volume mounts. If it has volume mounts, it can read your .env files, credentials, and browser data. Container escape is a secondary concern when you have already handed over the keys.
Isolation Levels That Match Reality
| Level | What It Is | Blast Radius | Use It When |
|---|---|---|---|
| 1 | Daily driver machine | Maximum | Never for autonomous agents |
| 2 | Docker with host mounts | High | Manual-only tools, no external inputs |
| 3 | Docker with no mounts | Medium | Inference-only, API servers, testing |
| 4 | VM on your machine | Low | Autonomous agents with external inputs |
| 5 | VPS or cloud instance | Lowest | Always-on agents and production workflows |
This ranking is about blast radius. It assumes your agent will eventually be tricked. The question is what it can reach when that happens.
VPS Beats Buying Local Hardware
We keep seeing OpenClaw users consider buying a Mac mini or a small desktop just to run agents. That is understandable, but it is rarely the safest option for always-on workloads.
If your agent needs to stay online, talk to external services, and run unattended, a $5 to $10 VPS is usually safer than new local hardware. A VPS isolates the agent from your personal files, browser sessions, password manager, and home network. If it gets compromised, you burn the server and rotate the keys.
Local hardware only makes sense when you need GPU inference, strict data residency, or offline execution. Even then, treat it like a server: separate accounts, separate secrets, and no personal data.
VPS quick-start
Use the OpenClaw safe setup guide as your baseline. A VPS-specific walkthrough is coming, but the isolation principles are the same.
Practical Recommendations
For Autonomous Agents
Use a VM at minimum. Prefer a VPS or cloud instance with no personal data, no SSO, and no shared credentials.
For AI Coding Assistants
Use a VM or dedicated machine if the tool has:
- Autonomous mode (runs without human confirmation)
- External integrations (Slack, email, webhooks)
- Shell command execution
Use Docker with caution if:
- You manually approve every action
- No external data sources are connected
- Volume mounts are limited to specific project directories (never home directory)
For Local LLM Inference
Docker or isolated containers are sufficient. Tools like Ollama don’t need host access for basic inference. The risk is in the agent capabilities, not the model itself.
For OpenClaw-Style Agents
Minimum: VM with snapshots. Better: VPS with separate API keys and a firewall. Best: dedicated instance with no personal accounts and a strict allow-list of tools.
Specific Hardening Steps
- Never mount your home directory. Mount only specific project folders, read-only where possible.
- Use .dockerignore aggressively. Prevent containers from reading sensitive files even if they escape their mount points.
- Run containers as non-root. Even if escaped, the attacker starts as an unprivileged user.
- Disable Docker socket access. Tools that need it can do anything on your host. Find alternatives.
- Use VM snapshots before enabling autonomous features. One click to revert if something goes wrong.
- Separate credentials. The VM/VPS should have its own API keys, not your personal ones. Limited scope, easy to revoke.
- Monitor network traffic. Unexpected outbound connections are your first warning sign.
- Use a default-deny firewall. Only allow outbound connections you actually need.
- Keep the agent off your home network. A VPS removes lateral movement into your personal devices.
The Convenience Trade-off
More isolation means more friction. Copying files in and out of VMs. Managing separate credentials. Slower context switching.
But the OpenClaw demonstrations showed exfiltration in minutes, not hours. An email arrives, the AI reads it, the attacker has your keys. The convenience of “always-on” access is the vulnerability.
For daily coding with confirmed actions: Docker with limited mounts is reasonable. For anything autonomous: VM minimum, VPS preferred. For production agents: Assume compromise and design for it.
Related links
- /posts/openclaw-security-reality-2026/
- /implement/openclaw/safe-setup/
- /risks/openclaw/architecture-risk/
- /risks/openclaw/fetch-and-follow/
- /verify/openclaw-claims/
Security practices evolve as fast as AI capabilities. This guide will be updated as new tools and attack patterns emerge.