Anthropic's launch this week of Claude Managed Agents was widely read as a direct threat to infrastructure companies, and it triggered significant market commentary about the potential impact on companies like Akamai. I understand the instinct: when a model provider of Anthropic's influence announces a hosted service, the assumption is that the infrastructure layer gets disintermediated. But that interpretation misses both what Anthropic built and what it actually needs to work at enterprise scale.
Read the engineering blog post carefully. Managed Agents is a hosted service that runs long-horizon agents on your behalf through a small set of interfaces meant to outlast any particular implementation. Anthropic virtualized the components of an agent (session, harness, and sandbox) so that each can be swapped independently. They designed for a future they explicitly acknowledge they cannot predict: the challenge they faced is "how to design a system for programs as yet unthought of."
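To make the interface-first design concrete, here is a minimal sketch of what "virtualized, independently swappable components" can look like in code. This is an illustration of the pattern, not Anthropic's API; all class and method names are assumptions.

```python
from typing import Protocol

# Illustrative contracts for the three components the post names.
# Any implementation satisfying the contract can be swapped in
# without touching the others.

class SandboxLike(Protocol):
    def run(self, code: str) -> str: ...

class HarnessLike(Protocol):
    def next_action(self, transcript: list[str]) -> str: ...

class Session:
    """Owns the agent loop; depends only on the contracts above."""

    def __init__(self, harness: HarnessLike, sandbox: SandboxLike):
        self.harness = harness
        self.sandbox = sandbox
        self.transcript: list[str] = []

    def step(self) -> str:
        action = self.harness.next_action(self.transcript)
        result = self.sandbox.run(action)
        self.transcript.append(result)
        return result

# Swapping the sandbox (local container, remote VM, edge location)
# changes only the object passed in -- Session and Harness are untouched.
```

The point of a design like this is exactly the one the blog post makes: because the session never assumes a particular sandbox or harness implementation, the system can host "programs as yet unthought of" behind stable interfaces.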
This is not a replacement for infrastructure. It is a massive new source of demand for it.
AI inference and Managed Agents demand distributed compute
Anthropic's engineering team described a fundamental design decision: decoupling the "brain" (Claude and its harness) from the "hands" (sandboxes and tools that perform actions). This decoupling enables what they call "many brains, many hands." A single agent session can spawn multiple inference calls and multiple execution environments, running in parallel, across different locations and resources.
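A rough sketch of the "many brains, many hands" pattern helps show why the compute fabric matters. This is not Anthropic's code; the function names, simulated latencies, and region labels are all illustrative assumptions.

```python
import asyncio

async def brain(task: str) -> str:
    """Stand-in for an inference call that decides what to execute."""
    await asyncio.sleep(0.01)  # simulated model latency
    return f"run:{task}"

async def hand(command: str, region: str) -> str:
    """Stand-in for a sandbox executing a command, possibly elsewhere."""
    await asyncio.sleep(0.01)  # simulated execution time
    return f"{region} completed {command}"

async def session(tasks: list[str]) -> list[str]:
    # Fan out: each task gets its own inference call, in parallel.
    commands = await asyncio.gather(*(brain(t) for t in tasks))
    # Fan out again: each command gets its own execution environment.
    # Where these regions physically live is deliberately left open --
    # that placement decision is the infrastructure layer's job.
    regions = ["lon", "iad", "fra"]
    return await asyncio.gather(
        *(hand(c, regions[i % len(regions)]) for i, c in enumerate(commands))
    )

results = asyncio.run(session(["parse-logs", "build-report", "run-tests"]))
```

Every fan-out in a sketch like this is a real inference call and a real sandbox somewhere, which is why a single agent session multiplies demand for distributed compute rather than reducing it.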
The critical insight to understand: Anthropic makes no assumptions about the number or location of brains or hands that Claude will need. That is not a throwaway architectural comment. It is a statement that the infrastructure layer — where inference happens, where sandboxes run, and where tools execute — is deliberately left open. Anthropic built the orchestration. The compute fabric underneath is exactly what companies like Akamai provide.
This maps directly to the distributed inference architecture we presented at NVIDIA GTC in San Jose earlier this year. Our thesis was that production AI inference, especially for agentic workflows that require low-latency responses, multi-step reasoning, and real-time tool calls, demands a different approach from centralized AI Factories. That is precisely what Managed Agents requires to function at enterprise scale.
AI Factories excel at foundational model training and large-scale concurrent GPU workloads. But when a Managed Agent spawns five parallel "brains" that each need to call tools, execute code, and return results within the latency constraints of a real-time application (remember, Anthropic recognizes they are designing for "programs as yet unthought of"), you need GPUs distributed across geographies, connected by high-throughput network fabric, with security enforcement at every layer.
That is Akamai Inference Cloud. Our continuum of compute resources — from centralized GPU clusters for training and fine-tuning, to distributed NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPUs for production inference, to edge points of presence for routing, caching, and security — is purpose-built for exactly this workload pattern.
You cannot outrun the speed of light: A user in London hitting a Virginia-based inference endpoint incurs approximately 28 milliseconds of propagation delay each way before a single token is generated. Multiply that round trip by the number of sequential inference calls in an agentic workflow, and centralized inference becomes unusable for real-time applications.
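The arithmetic is worth making explicit. The fiber-path distance and the per-workflow call count below are illustrative assumptions, not measurements, but they show how quickly the network tax compounds:

```python
# Back-of-the-envelope propagation delay, London -> Virginia.
FIBER_PATH_KM = 5_600            # assumed fiber-route distance
LIGHT_IN_FIBER_KM_PER_MS = 200   # light travels ~2/3 c in glass

one_way_ms = FIBER_PATH_KM / LIGHT_IN_FIBER_KM_PER_MS   # 28.0 ms
round_trip_ms = 2 * one_way_ms                          # 56.0 ms

# Agentic workflows chain inference calls sequentially, so the
# network tax multiplies before any compute happens at all.
sequential_calls = 10                                   # assumed
network_tax_ms = sequential_calls * round_trip_ms       # 560.0 ms
```

Over half a second of pure propagation delay, before a single token is generated, is the gap that distributed inference closes by moving the endpoint closer to the user.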
Security is not optional: It is the hardest unsolved problem
Anthropic's engineering team was remarkably candid about the security challenges of Managed Agents. In their original, coupled design, any untrusted code that Claude generated ran in the same container as the session's credentials, so a prompt injection only had to convince Claude to read its own environment. Once an attacker had those tokens, they could spawn fresh, unrestricted sessions and delegate work to Managed Agents.
The team’s structural fix was to separate the credentials from the sandbox. But they also acknowledged the fundamental tension: narrow scoping is an obvious mitigation, yet it encodes an assumption about what Claude can't do with a limited token, and Claude is getting increasingly smart.
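A minimal sketch can illustrate the shape of that structural fix, credentials held by a broker outside the sandbox rather than in its environment. This is our own illustration of the general pattern, not Anthropic's implementation; every name here is hypothetical, and the single-use handle is one possible scoping choice among many.

```python
import secrets

class CredentialBroker:
    """Holds the real token outside the sandbox; issues single-use handles."""

    def __init__(self, api_token: str):
        self._api_token = api_token      # never leaves this process
        self._handles: set[str] = set()

    def issue_handle(self) -> str:
        handle = secrets.token_hex(8)
        self._handles.add(handle)
        return handle

    def call_api(self, handle: str, request: str) -> str:
        # The broker, not the sandbox, attaches the secret. Each handle
        # works exactly once, limiting replay if the sandbox is compromised.
        if handle not in self._handles:
            raise PermissionError("unknown or already-used handle")
        self._handles.remove(handle)
        return f"OK: {request} (authorized outside the sandbox)"

class AgentSandbox:
    """Untrusted environment: it sees a handle, never the token itself."""

    def __init__(self, handle: str):
        self.env = {"API_HANDLE": handle}  # no secret in the environment
```

Even in a toy version like this, the tension the team describes is visible: the broker still has to decide what a handle is allowed to do, and that policy encodes assumptions about what a compromised agent cannot accomplish.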
This is where we believe Akamai's security portfolio becomes not just relevant but essential. When enterprises deploy Managed Agents in production, agents that autonomously execute multi-step tasks, call external APIs, generate and run code, and interact with corporate systems expand the attack surface dramatically. Prompt injection, adversarial code generation, credential theft, lateral movement, and data exfiltration become real operational risks at a scale that the "programs as yet unthought of" will inherit.
Akamai Firewall for AI protects large language model (LLM) endpoints from prompt injection and model abuse. Our API Security discovers, maps, and monitors the API calls that agents make to external tools and services. Our web application firewall inspects the traffic between agents and the applications they interact with. Akamai Guardicore Segmentation limits what an agent (or a compromised agent) can reach inside an enterprise network. And our Bot Manager distinguishes legitimate agent traffic from adversarial automation that mimics agent behavior.
No model provider, including Anthropic, is likely to build all of these capabilities; they are building the orchestration layer. The runtime protection, the network security, and the infrastructure enforcement are what companies like Akamai provide, and what enterprises will require before they deploy autonomous agents against production systems that handle customer data, financial transactions, and critical operations.
This is a growth driver, not a displacement
The market's initial reaction treated Managed Agents as if Anthropic had announced a replacement for CDN, cloud, and security infrastructure. We believe the opposite is true.
Every Managed Agent session that runs in production requires inference compute (demand for Akamai Inference Cloud); network connectivity and routing (demand for Akamai's edge platform and backbone); security enforcement at the application, API, and network layers (demand for Akamai's security portfolio); and orchestration across distributed locations (demand for the continuum of compute we are building).
The better analogy is what happened when cloud computing emerged. AWS did not eliminate the need for CDNs, security, and edge computing. It created orders of magnitude more applications, more traffic, and more attack surface, all of which drove demand for the services Akamai provides. We expect Managed Agents to follow the same pattern. More agents mean more inference calls, more API interactions, more distributed compute requirements, and more security exposure. We intend to be the infrastructure that powers them.
The bottom line
In the span of a single week, two announcements from the same company (Project Glasswing and Claude Managed Agents) illustrated both sides of the AI security and infrastructure equation. Glasswing makes vulnerability discovery faster, which amplifies the need for runtime protection during the gap between disclosure and remediation. Managed Agents makes autonomous AI workflows possible at scale, which amplifies the need for distributed inference infrastructure and security enforcement at every layer of the stack.
Both announcements strengthen the case for what Akamai builds. Our network, our data, our security portfolio, and our distributed compute platform are not threatened by these developments. We believe that they are required by them. Every new vulnerability discovered needs to be protected against in production until a patch is deployed. Every new agent deployed in production needs inference compute, network fabric, and security controls that the model provider does not provide — but Akamai does.
Akamai's edge network, runtime enforcement capabilities, and distributed inference infrastructure are designed to be the bridge between what AI enables and what enterprises need to deploy AI safely and at scale. That bridge has never been more critical than it is today.
Forward-looking statements
This blog post contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These statements include, but are not limited to, statements regarding the expected demand for Akamai’s distributed infrastructure and security products driven by AI agent deployments, the competitive positioning of Akamai Inference Cloud and security portfolio, the anticipated growth in enterprise adoption of managed AI agents, and our plans and strategies for product development and market positioning. Words such as “believe,” “will,” “expect,” “intend,” and similar expressions are intended to identify forward-looking statements. These statements are based on current expectations and assumptions and are subject to risks and uncertainties that could cause actual results to differ materially, including: the pace and scale of enterprise adoption of AI agent technologies; changes in the competitive landscape for AI infrastructure and security services; the rate of AI capability development by third parties; customer adoption rates for Akamai Inference Cloud and security products; the effectiveness of our products against evolving AI-related security threats; general economic and market conditions; and other factors described in our SEC filings, including our most recent Annual Report on Form 10-K. We undertake no obligation to update any forward-looking statement to reflect events or circumstances after the date of this post.