CPU on Akamai Cloud: Choose Shared, Dedicated, or High Memory

Match your workload to the right compute plan with predictable pricing, native private networking, and 99.99% uptime. Deploy in minutes through Cloud Manager, API, or CLI and resize as you scale.

Plans, fit, and key details

Dedicated CPU

Guaranteed, competition‑free cores for consistently high performance. Ideal for production apps that need predictable, sustained compute.

Shared CPU

Our most affordable VMs, with a strong price‑to‑performance ratio. CPU cores are shared with other instances; short bursts to 100% are fine, but sustained utilization should average under 80%.
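To check whether a workload fits the Shared CPU guideline, you can compare two snapshots of the aggregate `cpu` line in `/proc/stat`. This is a minimal sketch; the jiffy counters below are hypothetical, and in practice you would read real samples a few seconds apart.

```python
# Sketch: estimate sustained CPU utilization from two /proc/stat "cpu"
# samples, to check the ~80% guideline for Shared CPU plans.
# The jiffy counters below are hypothetical examples.

def cpu_utilization(before, after):
    """Each sample: user, nice, system, idle, iowait, irq, softirq, steal
    jiffies from the aggregate 'cpu' line of /proc/stat."""
    idle_delta = (after[3] + after[4]) - (before[3] + before[4])
    total_delta = sum(after) - sum(before)
    return (total_delta - idle_delta) / total_delta

sample_a = [100, 0, 50, 800, 50, 0, 0, 0]   # at time t
sample_b = [220, 0, 70, 840, 60, 5, 5, 0]   # at time t + interval
util = cpu_utilization(sample_a, sample_b)
print(f"utilization: {util:.0%}")  # 75% here: within the guideline
```

If sustained utilization regularly exceeds 80%, a Dedicated CPU plan is the better fit.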

High Memory

Memory‑optimized instances on dedicated CPUs for in‑memory workloads.

Pricing and transfer at a glance

If you’re comparing against hyperscalers like Azure, many customers find total costs lower with Akamai due to flat, predictable instance pricing and low egress fees. For a side‑by‑side projection, use the Cloud Cost Calculator.
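The flat-pricing model makes projections simple: a fixed instance price plus metered egress beyond the included transfer pool. The sketch below shows the arithmetic; the prices and transfer allowances are placeholders, not published rates, so use current pricing and the Cloud Cost Calculator for real numbers.

```python
# Sketch: project monthly cost as a flat instance price plus overage
# egress beyond the included transfer pool. All rates are hypothetical.

def monthly_cost(instance_price, egress_gb, included_gb, egress_rate):
    """Flat instance price plus egress charged only beyond the pool."""
    overage = max(0, egress_gb - included_gb)
    return instance_price + overage * egress_rate

# Hypothetical dedicated plan: $60/mo flat, 4 TB transfer included,
# $0.005/GB overage, with 5 TB of actual egress that month.
print(monthly_cost(60.0, 5000, 4000, 0.005))  # 60 + 1000 * 0.005 = 65.0
```

Because the instance price is flat, the bill only moves with egress overage, which keeps projections stable month to month.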

Distributed compute: core, metro, and edge

Distributed compute places dedicated CPU instances in major metro locations to push latency‑sensitive services closer to users—especially where traditional cloud regions are distant or limited.

AI workloads on Akamai GPU

Use GPU‑equipped instances for parallelized, accelerator‑friendly AI/ML work.

Best‑fit AI scenarios include LLM and VLM inference, real‑time RAG services, vision workloads (detection, tracking, OCR), speech‑to‑text/text‑to‑speech, and GPU‑accelerated data preprocessing. See GPU product details and GPU plans and pricing.

Note: Akamai Accelerated Compute uses NETINT VPUs and is purpose‑built for video transcoding pipelines. For AI acceleration, choose GPU plans.

Shared vs. Dedicated CPU for enterprise workloads

Comparisons and buyer guidance

Setup: from zero to a running instance

  1. Create an account and choose your region. Sign up
  2. Pick a plan (Shared, Dedicated, or High Memory) and size. How to choose
  3. Create your instance in Cloud Manager, API, or CLI. Create a compute instance
  4. Secure networking: set up VPC and Cloud Firewalls. Get started with VPC and Cloud Firewalls
  5. Enable backups and snapshots. Backups service
  6. Attach storage if needed. Block Storage
  7. Monitor and alert. Use Cloud Pulse metrics and Alerts. Monitoring
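Step 3 can be scripted against the Linode API. The sketch below only assembles the JSON body for the create-instance call (`POST /v4/linode/instances`) without sending it; the field names follow the public API, but the label, region, plan type, and image values are examples only.

```python
# Sketch: build (but do not send) the request body for creating a
# compute instance via the Linode API. Values are illustrative.

import json

def build_instance_request(label, region, plan_type, image, root_pass):
    return {
        "label": label,
        "region": region,          # e.g. a metro close to your users
        "type": plan_type,         # shared, dedicated, or high-memory plan ID
        "image": image,
        "root_pass": root_pass,
        "backups_enabled": True,   # step 5: enable the Backups service
    }

payload = build_instance_request(
    "web-01", "us-east", "g6-dedicated-2", "linode/ubuntu24.04", "change-me")
print(json.dumps(payload, indent=2))
```

In a real deployment you would POST this payload with an API token, or let the CLI or Cloud Manager build it for you.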

Reference architecture: distributed compute on Akamai Cloud

RFP criteria and KPIs for a distributed compute platform

KPIs to track post‑deployment: p95/p99 latency, error rates, CPU steal/ready time (should be near zero on Dedicated), throughput per vCPU, cost per request/session/GB, and egress cost per GB.
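The latency and cost KPIs above can be computed directly from raw samples. This sketch uses the nearest-rank method for percentiles and a flat monthly bill for cost per request; all numbers are illustrative, not benchmarks.

```python
# Sketch: compute post-deployment KPIs from raw samples.
# Latency percentiles via nearest-rank; cost per request from a flat
# monthly bill. All inputs are hypothetical.
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of
    samples at or below it (p in 0..100)."""
    ranked = sorted(samples)
    k = max(1, math.ceil(p / 100 * len(ranked)))
    return ranked[k - 1]

latencies_ms = [12, 15, 14, 18, 22, 95, 16, 13, 17, 250]  # per-request ms
print("p95:", percentile(latencies_ms, 95), "ms")
print("p99:", percentile(latencies_ms, 99), "ms")

monthly_bill = 65.0        # flat instance price + egress (hypothetical)
requests = 1_300_000       # requests served that month
print("cost per 1M requests:", round(monthly_bill / requests * 1e6, 2))
```

Track these per region and per plan size so you can tell whether a resize or a plan change moves the numbers.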

Operational playbook for performance, uptime, and cost SLOs

Next steps

Explore adjacent options:

  - GPUs for AI/ML and visualization. GPU product details
  - Accelerated Compute for media transcoding with VPUs. Accelerated Compute