OpenClaw (formerly known as Moltbot and Clawdbot) is blowing up right now. People are waking up to agents that coded overnight, scheduled their own tasks, and, in at least one viral case, called someone on the phone using a number the agent set up itself.
Then, there’s also Moltbook, the "Reddit for AI," where autonomous scripts are currently forming their own religions and digital economies.
It looks like magic. It looks like sentience.
It's not. It's a loop, a queue, and a timer. But here's the thing: That loop needs to actually keep running for any of this to work.
And right now, most people are running OpenClaw on their laptops. Which means that the moment they close the lid, their "autonomous agent" takes a nap.
This blog post walks you through how to move OpenClaw off your local machine and onto an always-on cloud virtual machine (VM) — with a single Terraform apply — so your agent can actually do the thing it was designed to do: Stay alive.
What is OpenClaw?
According to the OpenClaw website, OpenClaw is “The AI that actually does things. Clears your inbox, sends emails, manages your calendar, checks you in for flights. All from WhatsApp, Telegram, or any chat app you already use.”
Why always-on isn't optional
If you've read anything about how OpenClaw works under the hood, you know that the architecture is event-driven. Messages, heartbeats, crons, and webhooks are all just inputs entering a gateway that routes them to agents.
The heartbeat is the key piece. By default, every 30 minutes, OpenClaw wakes up and asks itself: Is there anything I should be doing right now? It checks reminders, follows up on loose threads, and reviews inboxes. That's the thing that makes it feel proactive instead of reactive.
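Stripped of everything OpenClaw-specific, a heartbeat is just a timer loop. Here's a toy sketch in shell (the interval, the bounded iteration count, and the echoed "check" are all stand-ins, not OpenClaw's actual implementation):

```bash
#!/usr/bin/env bash
# Toy heartbeat loop. The check below is a stand-in: OpenClaw's real
# heartbeat routes a wake-up event through its gateway.
INTERVAL="${INTERVAL:-1}"   # seconds; OpenClaw's default is 30 minutes (1800)
TICKS="${TICKS:-3}"         # bounded so the sketch terminates; a real daemon loops forever

heartbeat_tick() {
  # Stand-in for the real work: reminders, loose threads, inboxes.
  echo "heartbeat: checking reminders, threads, inboxes"
}

for _ in $(seq "$TICKS"); do
  heartbeat_tick
  sleep "$INTERVAL"
done
```

The point isn't the code; it's that this loop only feels "proactive" while the process hosting it stays alive.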
On a local machine, you're constantly fighting against sleep mode, process kills, Wi-Fi drops, and the fact that you occasionally need to, you know, use your computer for other things.
A Linux VM doesn't have any of these problems. It just sits there, running your agent, 24/7.
There's a more subtle benefit too: Context that actually persists. When your agent runs continuously, it accumulates state. You can pick up where you left off on Monday, ask why it made a decision on Thursday, or get a summary of everything that happened across a dozen threads while you were on PTO, without reconstructing any of it yourself. That only works if the process never dies in between (see the table below).
| Feature | Laptop agent | Cloud VM agent |
|---|---|---|
| Availability | Subject to sleep/lid closure | 24/7 "always on" |
| State | Resets on process kills | Persistent context/memory |
| Reliability | Fights Wi-Fi/battery drops | Hardened and backgrounded |

Table: Differences in availability, state, and reliability between a laptop agent and a cloud VM agent
What you're actually deploying
Let's keep this simple. The openclaw-quickstart repo gives you a Terraform config that spins up:
A Linode VM with security hardening baked in via cloud-init
A dedicated openclaw user (no running agents as root)
A cloud firewall that locks SSH down to your IP
Password auth disabled, key-based access only
OpenClaw (Moltbot/Clawdbot) installed and ready to onboard
No GPU. No managed AI services. No inbound proxies. No Kubernetes. Not even a container runtime. One Terraform apply and you have a hardened box ready to run your agent. The AI heavy lifting happens via API calls to your model provider — the VM itself is just running a process.
Prerequisites
You'll need three things before you start:
Terraform installed locally
A Linode account with an API token
An SSH key pair — if you don't have one yet:
```bash
ssh-keygen -t ed25519 -C "openclaw@example.com" -f ~/.ssh/openclaw
```
That's it. If you've ever deployed anything with Terraform before, the next part will feel familiar.
Five simple steps to deployment
There are just five simple steps to follow:
Clone the quickstart repo
```bash
git clone https://github.com/akamai-developers/openclaw-quickstart.git
cd openclaw-quickstart
```
Take a look around. The repo is intentionally small: a main.tf, a cloud-init.yaml, some variables, and outputs. No abstraction layers, no modules within modules. You can read the whole thing in five minutes.
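For orientation, the core of that main.tf looks roughly like the following. This is an abridged, illustrative sketch using the Linode Terraform provider's resource names; the repo itself is the source of truth:

```hcl
# Illustrative sketch only; field names follow the Linode Terraform
# provider, but the quickstart repo may differ in detail.
resource "linode_instance" "openclaw" {
  label           = "openclaw"
  region          = var.region
  type            = var.instance_type
  image           = "linode/ubuntu24.04"
  authorized_keys = [chomp(file(var.public_key_path))]

  # cloud-init handles hardening, user creation, and OpenClaw install
  metadata {
    user_data = base64encode(file("${path.module}/cloud-init.yaml"))
  }
}

resource "linode_firewall" "ssh_only" {
  label           = "openclaw-ssh"
  inbound_policy  = "DROP"
  outbound_policy = "ACCEPT"
  linodes         = [linode_instance.openclaw.id]

  inbound {
    label    = "allow-ssh"
    action   = "ACCEPT"
    protocol = "TCP"
    ports    = "22"
    ipv4     = var.allowed_ssh_cidrs
  }
}
```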
Configure your deployment
Set your Linode API token:
```bash
export LINODE_TOKEN="your-linode-api-token"
```
Then create a terraform.tfvars file (there's a .example in the repo to start from):
```hcl
linode_token      = "your-linode-api-token"
public_key_path   = "~/.ssh/openclaw.pub"
allowed_ssh_cidrs = ["YOUR_IP/32"]  # Lock SSH to your IP
region            = "us-east"       # Pick what's close to you
instance_type     = "g6-nanode-1"   # Plenty for an agent workload
```
A few notes on these choices:
allowed_ssh_cidrs — This is the firewall rule that matters most. Replace YOUR_IP/32 with your actual public IP. The default is 0.0.0.0/0 (open to the world), which you should change before deploying. Your agent VM will hold API keys and persistent state, so don't leave the front door open.
instance_type — The default g6-nanode-1 is the smallest Linode plan and it works. OpenClaw's workload is I/O and memory, not compute. You can bump to a larger shared CPU plan if you want headroom, but GPU and dedicated CPU plans are overkill.
region — Latency isn't critical for agent work, but closer regions make SSH sessions snappier.
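If you want to double-check the shape of that first value, here's a trivial helper. The IP below is a documentation placeholder, not a real address; substitute your own public IP (e.g., the output of curl ifconfig.me):

```bash
# Format an IP as the single-host CIDR that terraform.tfvars expects.
to_ssh_cidr() { printf 'allowed_ssh_cidrs = ["%s/32"]\n' "$1"; }

# 203.0.113.7 is a documentation placeholder; use your actual public IP.
to_ssh_cidr "203.0.113.7"
```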
Deploy
```bash
terraform init
terraform plan   # Review what's about to happen
terraform apply  # Build it
```
That's the whole infrastructure step. Terraform provisions the Linode, applies the cloud-init config (which handles security hardening, user creation, and OpenClaw installation), and sets up the firewall.
When it's done, grab your connection info:
```bash
# Get the VM IP and SSH command
terraform output -json

# Connect
ssh openclaw@$(terraform output -raw instance_ip)
```
You're now logged in as the openclaw user on a hardened VM with Clawdbot already installed. Root login is restricted to key-based auth, password authentication is completely disabled, and the firewall only allows SSH from the CIDRs you specified.
One more thing: Enable backups. This is easy to skip and annoying to regret. Once your agent has been running for a few weeks, it'll have accumulated memory, conversation history, configured integrations, and scheduled tasks. Rebuilding those from scratch because of a disk issue is not fun.
Enable Akamai Cloud Backups on your Linode. It's a small percentage of your instance cost. If your agent's state starts growing significantly (lots of logs, large memory files, etc.), you can also attach Akamai Block Storage to decouple data from the compute lifecycle — but for most setups, the local disk plus backups is plenty of protection.
Onboard and go
This is where OpenClaw comes alive. Run the onboarding wizard with the daemon flag:
```bash
openclaw onboard --install-daemon
```
The --install-daemon flag is the important part. It doesn't just configure your agent; it installs it as a background service. That means:
The agent keeps running after you disconnect SSH.
It restarts automatically if the VM reboots.
Logs and state are retained and manageable.
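If you're curious what "installed as a background service" typically means on a Linux VM, it's a process supervisor entry. The unit below is purely illustrative (the unit name, binary path, and subcommand are guesses; the file OpenClaw actually writes may differ), but it shows how the properties above fall out of a few lines of systemd config:

```ini
# Illustrative only; the unit OpenClaw's --install-daemon actually
# writes may look different.
[Unit]
Description=OpenClaw agent gateway
After=network-online.target
Wants=network-online.target

[Service]
User=openclaw
ExecStart=/usr/local/bin/openclaw gateway run   # hypothetical path and subcommand
Restart=always                                  # survives crashes
RestartSec=5

[Install]
WantedBy=multi-user.target                      # starts on boot
```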
The wizard walks you through model provider credentials (API keys or OAuth), workspace settings, and channel integrations: Slack, Telegram, WhatsApp, Discord, or whatever you use to communicate.
Follow the upstream onboarding docs if you want the full walkthrough.
Walk away
This is the best step. Once the daemon is running and your messaging integrations are connected, you don't need to SSH into this box for normal use. You talk to your agent the same way that you talk to a coworker — through Slack, Telegram, or whatever channel you set up.
Your day-to-day commands (when you do want to check in on the box) are:
```bash
openclaw gateway status   # Is it running?
openclaw gateway restart  # Kick it
openclaw logs --follow    # Watch what it's doing
```
No public endpoints need to be exposed. Your agent is just ... running.
If you want a visual, OpenClaw has a Gateway Control UI that runs on port 18789. Don't open that port publicly — tunnel it over SSH instead:
```bash
ssh -L 18789:localhost:18789 openclaw@$(terraform output -raw instance_ip)
```
Then hit http://localhost:18789 in your browser. Full dashboard, zero exposed ports.
What changes once your agent is always on
This is the part that's hard to explain until you experience it. When your agent is always running, your relationship with it shifts.
You stop starting it and start checking in with it — the same way you'd check in with a system or a teammate. You ask what happened overnight. You ask why it flagged that email. You get a summary of loose threads before your Monday meeting without digging through five different apps. Context accumulates instead of resetting every time your laptop sleeps.
Heartbeats fire on schedule, crons execute on time, and webhooks from your tools trigger agent work whether you're at your desk or not. You come back from lunch and there's a message waiting: "Hey, that PR you asked me to watch got merged. I updated the tracking doc."
It stops feeling like a tool you invoke and starts feeling like a system that's working alongside you.
That's not sentience. It's just a loop that never stops. But the experience? It's something else entirely.
Key takeaways
Always-on performance: Moving OpenClaw to a cloud VM ensures that it runs continuously, making it truly proactive and always ready to handle your tasks, unlike when it's running on a laptop that can go to sleep or lose connectivity.
Persistent context: With a cloud VM, OpenClaw maintains a consistent state, allowing you to pick up where you left off and get detailed summaries of ongoing tasks, which is crucial for long-term projects and a seamless workflow.
Enhanced security: The deployment process includes robust security measures like a dedicated user, cloud firewall, and key-based SSH access, ensuring that your AI agent is protected from unauthorized access and data breaches.
Simplified management: The entire setup can be done with just a few commands and a single Terraform apply, making it easy for developers and nontechnical users alike to get their OpenClaw agent up and running in the cloud.
Cost-effective solution: The smallest Linode plan (g6-nanode-1) is sufficient for OpenClaw's workload, and Cloud Backups and Block Storage add only a small cost in exchange for significant peace of mind.
Quick reference guide
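Everything above, condensed into the handful of commands you'll actually type. All of these appear in the steps earlier in this post; they assume the quickstart repo and a configured terraform.tfvars, so they're a cheat sheet rather than a runnable script:

```bash
# Deploy
git clone https://github.com/akamai-developers/openclaw-quickstart.git
cd openclaw-quickstart
terraform init && terraform apply

# Connect and onboard
ssh openclaw@$(terraform output -raw instance_ip)
openclaw onboard --install-daemon

# Day-to-day
openclaw gateway status
openclaw logs --follow

# Dashboard over an SSH tunnel (then open http://localhost:18789)
ssh -L 18789:localhost:18789 openclaw@$(terraform output -raw instance_ip)
```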