As AI inference moves from centralized servers to the edge, it's fundamentally redistributing compute. Akamai's architecture was built for this: first in how we pioneered the delivery of content, and now in how we're leading the delivery of intelligence.
After more than two decades of experience, we’re building on that same driving principle for AI by bringing inference closer to where decisions get made. It’s the next step in an evolution that’s redefining the architecture of the cloud by extending its reach from centralized regions to the distributed edge.
IDC’s recent report, Akamai: Navigating the Cloud Frontier — A Transformation from CDN to Distributed Cloud Provider, captures that transformation. It traces how we’ve evolved from the web’s original delivery pioneer to a distributed cloud provider ready for an AI-driven future.
For leaders who are preparing their organizations for what’s next, the IDC profile is worth a close read. It demonstrates how Akamai has been building the kind of distributed foundation that the coming AI era will demand and how the lessons of our past now shape the architecture of our future.
Reinvention as a throughline
Akamai was born out of the World Wide Web’s first bottleneck.
In 1995, Tim Berners-Lee, the web’s inventor, challenged MIT researchers to find a faster, smarter way to deliver online content. Our co-founders, Dr. Tom Leighton and Danny Lewin, responded by creating the content delivery network (CDN) — a design that replicated and routed content across a distributed set of servers worldwide. It solved what was commonly called the “World Wide Wait.”
That same principle — proximity as performance — has guided us ever since. As the internet matured, we recognized that the same distributed design that sped up content delivery could also defend it. Long before Zero Trust entered the mainstream, we began extending our reach from delivery into security through strategic acquisitions like Prolexic, Soha Systems, and Guardicore.
Then, in 2022, our acquisition of Linode brought developer-friendly cloud computing into the fold. IDC calls that acquisition the “foundational” moment that redefined Akamai’s trajectory — a deliberate pivot into cloud infrastructure that lets us run compute workloads, not just deliver content.
In many ways, that was the logical next step in our evolution: The distributed network that once moved bits of content now also moves workloads of computation.
From delivery to distributed cloud
IDC describes Akamai’s business today in clear terms: three interconnected pillars — delivery, security, and cloud computing. Each reinforces the others, creating a unified architecture for performance, protection, and scalability.
That interconnectedness is what differentiates us from traditional hyperscale providers. While most hyperscalers have centralized compute in a handful of massive regional data centers, we've been extending compute outward, integrating new core and distributed cloud sites directly into our existing edge network of more than 4,400 locations across 134 countries.
The result is a continuum of compute, spanning from core to edge, designed for low latency, high availability, and security that’s native rather than layered on. IDC identifies this architecture as Akamai’s defining advantage in the era of edge computing and AI inference, where milliseconds and proximity make all the difference.
The distributed advantage for AI
Every major technology wave reshapes infrastructure:
- The web pushed data to the edge for faster access.
- Mobile pushed compute closer to the device.
- Cloud recentralized workloads for scale.
And today, AI is decentralizing those workloads again for speed, privacy, and cost.
The shift from training to inference is no exception. Inference workloads demand compute that’s closer to users and data sources, not locked away in centralized regions. As IDC notes, this proximity “reduces latency and enhances real-time responsiveness for AI applications.”
That’s exactly what our Akamai Inference Cloud is built to do. By running inference at the edge, we can cut both the cost and the latency of transmitting data long distances for processing. It’s the continuation of our 25-year mission to bring computation closer to users.
It’s also a pragmatic response to the economics of AI.
Hyperscale training will always have its place, but as models move into production — powering chatbots, recommendation engines, video intelligence, and real-time analytics — inference efficiency becomes the real differentiator. Our globally distributed network gives us a natural edge: We can deliver those workloads faster, more affordably, and more securely than centralized architectures.
For any enterprise leader looking to operationalize AI at scale, that architectural difference matters.
Competing by being different
IDC’s profile underscores how Akamai is intentionally avoiding a head-on battle with hyperscalers. Rather than replicate their global regions and sprawling feature catalogs, we’re focusing on the areas where our heritage gives us an advantage: distributed performance, predictable pricing, and integrated security.
- Distributed performance: Our global footprint allows workloads to run closer to users, enabling single-digit millisecond latency for media, gaming, and AI inference.
- Predictable pricing: Our experience in content delivery informs aggressive egress pricing, helping customers avoid the spiraling data-transfer costs that often come with hyperscale clouds.
- Integrated security: Built-in protection — from distributed denial-of-service (DDoS) mitigation to Zero Trust Network Access (ZTNA) — travels with the compute itself.
IDC characterizes this as “a differentiated cloud experience focused on low-latency, cost-effective, and secure solutions.” In other words, we’re not trying to be the biggest cloud; we’re building the one that can get AI to places others can’t reach.
What’s next
If there’s a single thread that runs through Akamai’s history, it’s adaptation. Each wave of technology has turned our previous foundation into the springboard for the next.
We solved web congestion, then used that same distributed architecture to secure it. With the addition of cloud computing, we began running workloads closer to users — setting the stage for today’s work in powering and protecting AI.
IDC sees this as a key reason Akamai is “uniquely poised to capitalize on the explosive growth in edge computing and AI inference.” It’s validation that staying true to our distributed DNA continues to pay dividends.
The AI era won’t reward scale alone. It will favor infrastructure that can move intelligence to wherever it’s needed, as close as possible to the moment of interaction. That’s the agility we’ve been building toward all along.
Building for an “AI everywhere” future
AI represents a new computing paradigm. To deliver on its promise, organizations need infrastructure that’s as adaptable as the intelligence it supports. Our work across distributed cloud regions, developer experience, and integrated security is converging toward a single goal: creating an edge-first platform ready for that future.
IDC’s analysis doesn’t ignore the challenges ahead — the capital intensity of expansion, the perception gap versus hyperscalers, and the complexity of guiding enterprises through multicloud adoption. But it’s precisely our history of turning challenges into catalysts that makes this moment so exciting.
The report concludes that opportunities lie ahead in “AI inference at the edge, cloud cost optimization, and unified multicloud operations” — all areas where our distributed architecture provides a natural fit.
We’ve come full circle: from solving the “World Wide Wait” to helping the world solve for “AI everywhere.” And just as before, the solution starts by moving closer to the user.
Learn more
Read the full IDC Vendor Profile, Akamai: Navigating the Cloud Frontier, to explore how Akamai’s distributed cloud is shaping the infrastructure foundation for the next generation of AI applications.