Your OpenClaw Agent Doesn't Sleep. Your Laptop Does. Move It to the Cloud.

Written by Lena Hall

Lena Hall is an expert in practical AI adoption, data engineering, cloud and pragmatic architecture, and driving strategic AI integration at scale. She also has extensive experience leading large, high-performing technical teams. She helps developers and businesses get real results from AI by defining data and AI strategies, integrating LLMs into complex systems, connecting AI solutions with custom data and business tools, and optimizing outcomes through proven and innovative architectures. Lena has 15+ years of deeply technical background as a solution architect and a technical leader in large-scale data, analytics, machine learning, and cloud computing. She frequently shares practical knowledge on her LinkedIn page and at industry conferences as an international keynote speaker.

Written by Shawn Michels

Shawn Michels, Vice President of Product Management at Akamai, is responsible for driving the strategy and execution of the company’s cloud computing product portfolio. Shawn leads a global team charged with delivering a single platform that allows developers to build, secure, and deliver applications across the entire continuum of compute. Over the course of his career, Shawn's products and services have been used by the world’s largest companies and brands, including Gogo Vision, the industry’s first solution to deliver video directly to consumers over Wi-Fi while in flight.

Written by Tarun Chinmai Sekar

Tarun is a Principal Engineer at Akamai, supporting the engineering teams that build Kubernetes and cloud native products for Akamai Cloud. Tarun has previously worked on Akamai’s Zero Trust security products, and is a self-confessed AI enthusiast.


OpenClaw (formerly known as Moltbot or Clawdbot) is blowing up right now. People are waking up to agents that coded overnight, scheduled their own tasks, and — in at least one viral case — called someone on the phone using a number it set up by itself.

Then there's Moltbook, the "Reddit for AI," where autonomous scripts are currently forming their own religions and digital economies.

It looks like magic. It looks like sentience.

It's not. It's a loop, a queue, and a timer. But here's the thing: That loop needs to actually keep running for any of this to work.

And right now, most people are running OpenClaw on their laptops. Which means that the moment they close the lid, their "autonomous agent" takes a nap.

This blog post walks you through how to move OpenClaw off your local machine and onto an always-on cloud virtual machine (VM) — with a single Terraform apply — so your agent can actually do the thing it was designed to do: Stay alive.

What is OpenClaw?

According to the OpenClaw website, OpenClaw is “The AI that actually does things. Clears your inbox, sends emails, manages your calendar, checks you in for flights. All from WhatsApp, Telegram, or any chat app you already use.”

Why always-on isn't optional

If you've read anything about how OpenClaw works under the hood, you know that the architecture is event-driven. Messages, heartbeats, crons, and webhooks are all just inputs entering a gateway that routes them to agents.

The heartbeat is the key piece. By default, every 30 minutes, OpenClaw wakes up and asks itself: Is there anything I should be doing right now? It checks reminders, follows up on loose threads, and reviews inboxes. That's the thing that makes it feel proactive instead of reactive.
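As a rough mental model (a conceptual sketch, not OpenClaw's actual implementation), a heartbeat is just a timer that scans a list of pending items and surfaces anything that's due:

```shell
#!/bin/sh
# Conceptual heartbeat sketch: scan a reminders file and surface anything
# that is due at or before the current tick. The file and tick value here
# are made-up illustrations.
now=3   # pretend the current tick is 3
cat > /tmp/reminders.txt <<'EOF'
1 follow up on PR review
5 check flight status
2 reply to Slack thread
EOF
: > /tmp/due.txt
while read -r due task; do
  if [ "$due" -le "$now" ]; then
    # A real agent would act on the item; here we just report it.
    echo "due: $task" | tee -a /tmp/due.txt
  fi
done < /tmp/reminders.txt
```

In the real agent, the loop body would fire on a timer (OpenClaw's default interval is 30 minutes), and the "tasks" would be inbox checks, reminders, and follow-ups rather than lines in a file.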

On a local machine, you're constantly fighting against sleep mode, process kills, Wi-Fi drops, and the fact that you occasionally need to, you know, use your computer for other things.

A Linux VM doesn't have any of these problems. It just sits there, running your agent, 24/7.

There's a more subtle benefit too: Context that actually persists. When your agent runs continuously, it accumulates state. You can pick up where you left off on Monday, ask why it made a decision on Thursday, or get a summary of everything that happened across a dozen threads while you were on PTO — without reconstructing any of it yourself. That only works if the process never dies in between (see the table below).

| Feature      | Laptop agent                 | Cloud VM agent            |
|--------------|------------------------------|---------------------------|
| Availability | Subject to sleep/lid closure | 24/7 "always on"          |
| State        | Resets on process kills      | Persistent context/memory |
| Reliability  | Fights Wi-Fi/battery drops   | Hardened and backgrounded |

Table: The differences in availability, state, and reliability between a laptop-hosted agent and a cloud VM agent

What you're actually deploying

Let's keep this simple. The openclaw-quickstart repo gives you a Terraform config that spins up:

  • A Linode VM with security hardening baked in via cloud-init

  • A dedicated openclaw user (no running agents as root)

  • A cloud firewall that locks SSH down to your IP

  • Password auth disabled, key-based access only

  • OpenClaw (Moltbot/Clawdbot) installed and ready to onboard

No GPU. No managed AI services. No inbound proxies. No Kubernetes. Not even a container runtime. One Terraform apply and you have a hardened box ready to run your agent. The AI heavy lifting happens via API calls to your model provider — the VM itself is just running a process.
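For orientation, the Terraform involved has roughly this shape (an illustrative sketch, not the repo's actual main.tf — the resource labels and image name here are assumptions):

```hcl
# Illustrative sketch only; the quickstart repo's real config may differ.
resource "linode_instance" "openclaw" {
  label           = "openclaw-agent"
  region          = var.region
  type            = var.instance_type
  image           = "linode/ubuntu24.04"
  authorized_keys = [chomp(file(var.public_key_path))]

  # cloud-init handles hardening, user creation, and OpenClaw install
  metadata {
    user_data = base64encode(file("${path.module}/cloud-init.yaml"))
  }
}

resource "linode_firewall" "ssh_only" {
  label           = "openclaw-fw"
  inbound_policy  = "DROP"
  outbound_policy = "ACCEPT"

  inbound {
    label    = "ssh"
    action   = "ACCEPT"
    protocol = "TCP"
    ports    = "22"
    ipv4     = var.allowed_ssh_cidrs
  }

  linodes = [linode_instance.openclaw.id]
}
```

Two resources, one variable file. That's the entire surface area you need to review before running it.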

Prerequisites

You'll need three things before you start:

  1. Terraform installed locally

  2. A Linode account with an API token

  3. An SSH key pair — if you don't have one yet:

ssh-keygen -t ed25519 -C "openclaw@example.com" -f ~/.ssh/openclaw

That's it. If you've ever deployed anything with Terraform before, the next part will feel familiar.

Five simple steps to deployment

Clone the quickstart repo

git clone https://github.com/akamai-developers/openclaw-quickstart.git
cd openclaw-quickstart

Take a look around. The repo is intentionally small: a main.tf, a cloud-init.yaml, some variables, and outputs. No abstraction layers, no modules within modules. You can read the whole thing in five minutes.

Configure your deployment

Set your Linode API token:

export LINODE_TOKEN="your-linode-api-token"

Then create a terraform.tfvars file (there's a .example in the repo to start from):

linode_token = "your-linode-api-token"
public_key_path = "~/.ssh/openclaw.pub"
allowed_ssh_cidrs = ["YOUR_IP/32"] # Lock SSH to your IP
region = "us-east"  # Pick what's close to you
instance_type = "g6-nanode-1" # Plenty for an agent workload

A few notes on these choices:

allowed_ssh_cidrs — This is the firewall rule that matters most. Replace YOUR_IP/32 with your actual public IP. The default is 0.0.0.0/0 (open to the world), which you should change before deploying. Your agent VM will hold API keys and persistent state, so don't leave the front door open.
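If you don't know your public IP offhand, a shell one-liner can fetch and format it (ifconfig.me is just one of several lookup services; the hardcoded address below is a placeholder for the curl result):

```shell
# Look up your public IPv4 and format it as a /32 CIDR for terraform.tfvars.
# MY_IP="$(curl -s https://ifconfig.me)"
MY_IP="203.0.113.7"   # example value; replace with the curl result above
echo "allowed_ssh_cidrs = [\"${MY_IP}/32\"]"
# prints: allowed_ssh_cidrs = ["203.0.113.7/32"]
```

Paste that line straight into your terraform.tfvars.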

instance_type — The default g6-nanode-1 is the smallest Linode plan and it works. OpenClaw's workload is I/O and memory, not compute. You can bump to a larger shared CPU plan if you want headroom, but GPU and dedicated CPU plans are overkill.

region — Latency isn't critical for agent work, but closer regions make SSH sessions snappier.

Deploy

terraform init
terraform plan    # Review what's about to happen
terraform apply   # Build it

That's the whole infrastructure step. Terraform provisions the Linode, applies the cloud-init config (which handles security hardening, user creation, and OpenClaw installation), and sets up the firewall.
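To make "cloud-init handles it" concrete, the hardening applied looks something like this sketch (illustrative only; the repo's actual cloud-init.yaml may differ in structure and detail):

```yaml
# Illustrative cloud-init sketch -- not the repo's exact file.
users:
  - name: openclaw
    groups: [sudo]
    shell: /bin/bash
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    ssh_authorized_keys:
      - "ssh-ed25519 AAAA... openclaw@example.com"

write_files:
  - path: /etc/ssh/sshd_config.d/99-hardening.conf
    content: |
      PasswordAuthentication no
      PermitRootLogin prohibit-password

runcmd:
  - systemctl restart ssh
```

The pattern is the point: a dedicated non-root user, key-only SSH, and no passwords anywhere, all applied before you ever log in.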

When it's done, grab your connection info:

# Get the VM IP and SSH command
terraform output -json
# Connect
ssh openclaw@$(terraform output -raw instance_ip)

You're now logged in as the openclaw user on a hardened VM with Clawdbot already installed. Root login is restricted to key-based auth, password authentication is completely disabled, and the firewall only allows SSH from the CIDRs you specified.

One more thing: Enable backups. This is easy to skip and annoying to regret. Once your agent has been running for a few weeks, it'll have accumulated memory, conversation history, configured integrations, and scheduled tasks. Rebuilding those from scratch because of a disk issue is not fun.

Enable Akamai Cloud Backups on your Linode. It's a small percentage of your instance cost. If your agent's state starts growing significantly (lots of logs, large memory files, etc.), you can also attach Akamai Block Storage to decouple data from the compute lifecycle — but for most setups, the local disk plus backups is plenty of protection.

Onboard and go

This is where OpenClaw comes alive. Run the onboarding wizard with the daemon flag:

openclaw onboard --install-daemon

The --install-daemon flag is the important part. It doesn't just configure your agent; it installs it as a background service. That means:

  • The agent keeps running after you disconnect SSH.

  • It restarts automatically if the VM reboots.

  • Logs and state are retained and manageable.
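Under the hood, "installed as a background service" on Linux typically means a systemd unit along these lines (an illustrative sketch; the unit name, binary path, and subcommand OpenClaw actually writes are assumptions):

```ini
# Illustrative systemd unit -- the one OpenClaw installs may differ.
[Unit]
Description=OpenClaw agent gateway
After=network-online.target
Wants=network-online.target

[Service]
User=openclaw
ExecStart=/usr/local/bin/openclaw gateway run
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Restart=always is what gives you the "keeps running after you disconnect, comes back after a reboot" behavior described above.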

The wizard walks you through model provider credentials (API keys or OAuth), workspace settings, and channel integrations: Slack, Telegram, WhatsApp, Discord, or whatever you use to communicate.

Follow the upstream onboarding docs if you want the full walkthrough.

Walk away

This is the best step. Once the daemon is running and your messaging integrations are connected, you don't need to SSH into this box for normal use. You talk to your agent the same way that you talk to a coworker — through Slack, Telegram, or whatever channel you set up.

Your day-to-day commands (when you do want to check in on the box) are:

openclaw gateway status    # Is it running?
openclaw gateway restart   # Kick it
openclaw logs --follow     # Watch what it's doing

No public endpoints need to be exposed. Your agent is just ... running.

If you want a visual, OpenClaw has a Gateway Control UI that runs on port 18789. Don't open that port publicly — tunnel it over SSH instead:

ssh -L 18789:localhost:18789 openclaw@$(terraform output -raw instance_ip)

Then hit http://localhost:18789 in your browser. Full dashboard, zero exposed ports.
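If you find yourself opening that tunnel often, you can bake it into your SSH client config (the host alias and IP below are illustrative; substitute your VM's address from terraform output):

```
# ~/.ssh/config -- host alias and address are illustrative
Host openclaw-vm
    HostName 203.0.113.7
    User openclaw
    IdentityFile ~/.ssh/openclaw
    LocalForward 18789 localhost:18789
```

After that, a plain "ssh openclaw-vm" logs you in and forwards the dashboard port in one step.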

What changes once your agent is always on

This is the part that's hard to explain until you experience it. When your agent is always running, your relationship with it shifts.

You stop starting it and start checking in with it — the same way you'd check in with a system or a teammate. You ask what happened overnight. You ask why it flagged that email. You get a summary of loose threads before your Monday meeting without digging through five different apps. Context accumulates instead of resetting every time your laptop sleeps.

Heartbeats fire on schedule, crons execute on time, and webhooks from your tools trigger agent work whether you're at your desk or not. You come back from lunch and there's a message waiting: "Hey, that PR you asked me to watch got merged. I updated the tracking doc."

It stops feeling like a tool you invoke and starts feeling like a system that's working alongside you.

That's not sentience. It's just a loop that never stops. But the experience? It's something else entirely.

Key takeaways

  • Always-on performance: Moving OpenClaw to a cloud VM ensures that it runs continuously, making it truly proactive and always ready to handle your tasks, unlike when it's running on a laptop that can go to sleep or lose connectivity.

  • Persistent context: With a cloud VM, OpenClaw maintains a consistent state, allowing you to pick up where you left off and get detailed summaries of ongoing tasks, which is crucial for long-term projects and a seamless workflow.

  • Enhanced security: The deployment process includes robust security measures like a dedicated user, cloud firewall, and key-based SSH access, ensuring that your AI agent is protected from unauthorized access and data breaches.

  • Simplified management: The entire setup can be done with just a few commands and a single Terraform apply, making it easy for developers and nontechnical users alike to get their OpenClaw agent up and running in the cloud.

  • Cost-effective solution: The smallest Linode plan (g6-nanode-1) is sufficient for OpenClaw's workload, and Cloud Backups and Block Storage are a small additional cost that provide significant peace of mind.


