Akamai recently commissioned a study by Forrester, The State of Enterprise AI: Gaining Experience and Managing Risk, to better understand how companies are adopting artificial intelligence (AI) and what they are prioritizing.
We gained some useful insights from that research. For example, 76% of organizations are adopting AI solutions to improve customer experience (CX) and operational efficiency, and 71% view customer retention as another leading motivator. When asked how they measure AI success, respondents pointed to improved CX (75%) and increased revenue (74%), which highlights how tightly customer satisfaction is tied to growth.
But, arguably, the most important thing we learned is that companies are achieving success with AI through a phased adoption pattern.
Enterprise AI adoption is at an inflection point
Enterprise AI adoption has reached an inflection point, where growth in scale, maturity, and technical ambition converge. Companies are beginning to plan their shift from early adoption to an organization-wide AI rollout, which requires scalable edge AI infrastructure.
To prepare sufficiently for this shift, the companies at the forefront of AI adoption are doing two things: They are developing a foundation of low-risk, high-reward AI applications, and they are preparing for more complex AI use in the future by experimenting with the technology today.
The next wave of AI applications will be more compute intensive, more globally distributed, and more dependent on fast data processing. Imagine real-time language translation at scale during live global events, customer service calls, or multiplayer gaming chat, all without downtime. Or think about AI-powered visual search and object recognition that would let shoppers snap a photo of a product in a retail store and instantly find similar items.
So, the question for technical leaders today is: How do you scale AI in a way that delivers real-time performance, adapts to unpredictable demand, and meets compliance requirements in every region?
Shift to edge native infrastructure to prepare for the future
Companies should consider moving to an edge native execution model now so that they have the foundation to handle more complex edge AI use cases in the future. Even today, real-world customer-facing AI use cases are demanding.
Applications such as chatbots, product recommendations, or voice-driven assistants are all latency bound. A few hundred milliseconds can be the difference between delight and frustration. These workloads are also bursty — they spike during flash sales, media events, or viral campaigns. And because they often involve sensitive customer data, they require strict control over where and how information moves.
Traditional cloud models struggle to meet these demands. Edge native architectures, on the other hand, bring computation closer to the user. That means lower latency to protect customer satisfaction, regional deployment that aligns with global ambitions and regulatory rules, and the ability to absorb sudden traffic peaks without runaway costs.
Shifting to edge native infrastructure for AI adoption can not only get companies through the rest of the early-adoption phase of the AI tech wave, but also prepare them for the future.
Use cases: From theory to practice with AI at the edge
Use case 1: Automated customer service resolution
Consider automated customer service resolution, one of the top enterprise use cases identified in the Forrester study. Many organizations still rely on human agents to handle large volumes of routine requests, which creates bottlenecks. With an edge native approach, incoming questions can be sorted and triaged directly at the edge. The right requests flow to the right systems with security policies enforced before they ever touch back-end infrastructure.
Lightweight AI models running on Linode Kubernetes Engine (LKE) generate instant, streaming responses, often by pulling data from managed databases or cached content for speed and accuracy. The results are faster response times, lower escalation rates, and higher customer satisfaction. More than half of organizations are already implementing automated resolution, with nearly one-third ranking it as their most critical AI use case.
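To make this concrete, here is a minimal sketch of what edge-side triage with streamed answers could look like. It uses FastAPI, a keyword matcher standing in for a lightweight intent model, and an in-memory dictionary standing in for a managed database or cache; all of these names are illustrative assumptions, not Akamai or LKE APIs.

```python
# Minimal sketch: triage an incoming question at the edge, answer
# routine topics from cached content with a streamed reply, and
# escalate everything else. All names here are illustrative.
import asyncio
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

# Stand-in for a managed database or cached content.
ANSWER_CACHE = {
    "billing": "Your invoice is available under Account > Billing.",
    "password": "Use the reset link on the sign-in page.",
}

class Question(BaseModel):
    user_id: str
    text: str

def triage(text: str) -> str:
    """Toy classifier: route by keyword. A real deployment would
    run a small intent model at the edge instead."""
    lowered = text.lower()
    for topic in ANSWER_CACHE:
        if topic in lowered:
            return topic
    return "escalate"  # hand off to a human agent

async def stream_tokens(answer: str):
    # Stream word by word to mimic token-streaming inference.
    for word in answer.split():
        yield word + " "
        await asyncio.sleep(0.02)

@app.post("/support")
async def support(q: Question):
    topic = triage(q.text)
    if topic == "escalate":
        return {"routed_to": "human_agent"}
    return StreamingResponse(stream_tokens(ANSWER_CACHE[topic]),
                             media_type="text/plain")
```

In production, the triage step would call a small classification model and the cache lookup would hit a nearby managed database, but the control flow (classify, enforce policy, answer or escalate) stays the same.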
Use case 2: Personalized recommendations
A second example is personalized recommendations. Whether it’s a retailer suggesting the right product or a media platform curating content, personalization has to feel instantaneous. Edge native deployment allows user behavior data to be collected and processed locally, with built-in privacy protections. Nearby databases and caching can speed the lookup of past interactions while AI models run on LKE or virtual machines, depending on complexity.
The entire cycle (input, inference, and output) can happen in less than 200 milliseconds. This level of responsiveness is why more than half of enterprises already see personalization as a core AI capability, according to the Forrester survey.
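As a rough illustration of that budget, the sketch below times a toy recommendation cycle against a 200-millisecond target. The in-memory feature store, catalog, and model stand-in are assumptions for illustration, not a real feature pipeline.

```python
# Minimal sketch of an edge-local recommendation cycle measured
# against a 200 ms latency budget. Data and logic are stand-ins.
import time

LATENCY_BUDGET_S = 0.200  # covers input, inference, and output

# Stand-in for a nearby cache of recent user interactions.
RECENT_VIEWS = {"user-42": ["running-shoes", "water-bottle"]}

# Stand-in for a related-items catalog.
CATALOG = {
    "running-shoes": ["trail-shoes", "running-socks"],
    "water-bottle": ["hydration-pack"],
}

def recommend(user_id: str, k: int = 3) -> list[str]:
    """Toy inference: items related to what the user viewed.
    A real deployment would call a small model on LKE here."""
    seen = RECENT_VIEWS.get(user_id, [])
    related = [item for v in seen for item in CATALOG.get(v, [])]
    return related[:k]

start = time.perf_counter()
recs = recommend("user-42")
elapsed = time.perf_counter() - start

print(recs, f"served in {elapsed * 1000:.2f} ms")
assert elapsed < LATENCY_BUDGET_S, "blew the 200 ms budget"
```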
The same model can power more complex future use cases, such as visual search (customers upload an image and get instant, AI-enhanced results) or voice-driven applications (low-latency streaming makes conversations feel natural). As organizations push into these new areas, the need for compute and storage closer to users becomes even more pronounced.
Reducing risk with the right stack
For AI to succeed at scale, engineers need a platform built for resilience, predictability, and security. This is Akamai’s vision: Build a stack that not only supports today’s workloads in core regions but also evolves to bring AI closer to users in the future.
Today, the App Platform helps teams simplify deployment by integrating open source projects into a production-ready environment, reducing the complexity of standing up applications. LKE makes it easy to rapidly deploy models with autoscaling, paired with a flat-rate pricing model that keeps costs predictable even during periods of bursty demand.
Managed databases deliver low-latency reads and built-in failover to safeguard customer-critical paths, while virtual machines provide the flexibility for long-running or specialized workloads, all supported by Zero Trust integration.
What ties these components together, now and in the future, is proximity and predictability. AI applications can already run with stable costs and reliable performance across Akamai’s global footprint, and the trajectory is moving toward extending these benefits even closer to customers as GPU and platform availability continue to expand.
Security and compliance by design
A major hurdle when it comes to scaling AI is convincing both companies and customers that they can trust the technology. According to the Forrester study, 63% of organizations cite security as a concern, 55% worry about compliance, and 45% fear damage to the brand’s reputation if things go wrong. These risks are real, but they can be mitigated with a security-first approach built directly into the edge.
Edge native models allow policies to be enforced before requests ever reach an AI system. Traffic can be isolated, rate limited, and filtered through firewalls and bot defenses. Zero Trust principles apply not only to users but also to workloads themselves.
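As one concrete illustration of such a policy, the sketch below implements a per-client token-bucket rate limiter of the kind an edge layer might apply before any request reaches a model. The rates, burst size, and client IDs are illustrative assumptions.

```python
# Minimal sketch of one edge policy: per-client token-bucket rate
# limiting enforced before a request reaches the AI back end.
import time

class TokenBucket:
    def __init__(self, rate_per_s: float, burst: int):
        self.rate = rate_per_s        # tokens refilled per second
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # rejected; never forwarded to the model

buckets: dict[str, TokenBucket] = {}

def admit(client_id: str) -> bool:
    bucket = buckets.setdefault(client_id,
                                TokenBucket(rate_per_s=5, burst=10))
    return bucket.allow()

# A burst of 12 requests: roughly the first 10 pass, the rest are shed.
print([admit("client-a") for _ in range(12)])
```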
Additionally, enterprises can adopt safe deployment practices such as canary rollouts, in which new features are tested on a small fraction of users, and red-team exercises to uncover weaknesses before they affect customers. For organizations that feel their current platforms leave gaps, prebuilt reference architectures and Golden Paths offer a way to build consistently and securely without starting from scratch.
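Here is a minimal sketch of the canary idea: hash each user ID so that a stable slice of users is consistently routed to the new version. The version names and the 5% split are assumptions for illustration.

```python
# Minimal sketch of a canary rollout: deterministically route a
# small, stable fraction of users to a new model version.
import hashlib

CANARY_FRACTION = 0.05  # 5% of users see the new version

def assign_version(user_id: str) -> str:
    """Hash the user ID so each user lands in the same bucket,
    and thus sees the same version, on every request."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # in [0, 1)
    return "model-v2-canary" if bucket < CANARY_FRACTION else "model-v1"

users = [f"user-{i}" for i in range(1000)]
share = sum(assign_version(u) == "model-v2-canary" for u in users) / len(users)
print(f"canary share: {share:.1%}")  # roughly 5%
```

Because assignment is deterministic, a user never flips between versions mid-session, and widening the rollout is just a matter of raising the fraction.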
With an edge native model, companies can safeguard both the brand and the customer experience while continuing to innovate.
The 3 phases of global AI adoption at the edge
One of the most important lessons from Forrester’s data on enterprise adoption patterns is that AI does not need to be rolled out everywhere at once. A phased approach allows organizations to balance ambition with risk management.
Phase 1
Companies leading the charge with AI typically start with a focused pilot like automated service resolution for a specific support queue. They define clear success metrics — such as response times, automation rates, and escalation thresholds — and establish guardrails to manage risk.
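As a rough sketch of what those guardrails might look like in code, the example below computes automation and escalation rates from ticket outcomes and asserts them against pilot thresholds. The field names and threshold values are illustrative assumptions.

```python
# Minimal sketch of pilot guardrails: derive automation and
# escalation rates from ticket outcomes and check them against
# thresholds agreed before the pilot starts.
tickets = [
    {"resolved_by": "ai", "response_ms": 480},
    {"resolved_by": "ai", "response_ms": 350},
    {"resolved_by": "human", "response_ms": 9200},
    {"resolved_by": "ai", "response_ms": 610},
]

automation_rate = sum(t["resolved_by"] == "ai" for t in tickets) / len(tickets)
escalation_rate = 1.0 - automation_rate
worst_ai_ms = max(t["response_ms"] for t in tickets if t["resolved_by"] == "ai")

print(f"automation: {automation_rate:.0%}, escalation: {escalation_rate:.0%}")

# Example guardrails for the pilot queue.
assert automation_rate >= 0.50, "automation below pilot target"
assert worst_ai_ms <= 1000, "AI responses exceed the latency target"
```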
Phase 2
Once the pilot delivers results, the next phase is about scaling. This might mean extending personalization across multiple customer touchpoints, such as a website and a mobile app, and deploying the capability across several regions. This step reflects the near-term global ambitions that more than 70% of enterprises report.
Phase 3
Finally, in Phase 3, organizations broaden their use of AI into newer areas like visual search or procedural content creation. By this point, they’ve established strong standards for performance and safety, making it possible to innovate responsibly.
AI foundations on the edge
The data tells us that enterprises are going all in on AI: Adoption rates are high, companies are actively tying AI applications to return on investment (ROI), and many have clear goals to take the technology global. Success, however, will depend on execution.
Customer-facing AI applications are latency bound, bursty, and data sensitive. Edge native models provide a better path to delivering the infrastructure that enterprise AI requires.
For AI engineers, the challenge is to build smarter applications in a way that customers can trust and businesses can scale. At Akamai, we believe the edge is where enterprise AI ambitions meet reality. Our distributed platform is built to help organizations deploy AI globally without sacrificing security, performance, or compliance.
Learn more
For all the results, download the full Forrester report.