The Edge of Agency: Defending Against the Risks of Agentic AI

Maxim Zavodchik

Written by

Maxim Zavodchik

August 15, 2025

Maxim Zavodchik

Written by

Maxim Zavodchik

Maxim Zavodchik is an experienced security research leader with a proven track record in establishing, growing, and defining strategic vision for Threat Research and Data Science teams in Web Application Security and API Protection. When he’s not protecting life online, you can find him being a super dad and/or watching Studio Ghibli movies.

This shift brings powerful new capabilities, but also a new class of security risks.
This shift brings powerful new capabilities, but also a new class of security risks.

Contents

Executive summary

  • Agentic AI delivers greater autonomy, but at the cost of increased complexity and unpredictability.

  • Companies are using agentic applications to autonomously execute business processes. Semi-independent agents go beyond conventional generative artificial intelligence (GenAI) risks by chaining through memory, tools, and other agents, which expands the attack surface, blurs trust boundaries, increases the blast radius, and introduces new classes of attack.

  • The immature agentic ecosystem, built for functionality and not for security, combined with agents operating under broad mandates, leaves the door open to impersonation attacks and unauthorized access.

  • Multi-agent systems extend the threat beyond a single compromised agent, creating new opportunities for lateral propagation and cascading behaviors that escalate localized issues into systemic failures through agent-to-agent interactions.

  • As businesses increasingly allow AI agents to access applications on users’ behalf, the same interfaces used by helpful agents can also be exploited by malicious ones — obscuring the line between legitimate autonomous use and targeted abuse.

  • Adversaries are shifting toward “vibe scraping” — using autonomous agents for content extraction and unfair trading that pursue outcomes, rather than follow step-by-step instructions, resulting in a more stealthy, adaptive, and elevated form of abuse.

  • Akamai helps reduce these risks at the point of interaction by detecting and neutralizing attacks before they reach the planning or execution layer to enable businesses with secure AI adoption without losing control.

Introduction: From automation to autonomy

The age of agentic AI is here. We are at the turning point when systems no longer just automate tasks but actively make decisions and take actions in pursuit of goals. Unlike past decades in which automation brought consistency and scale, this new era not only introduces autonomy and sophistication, but also complexity and unpredictability. As organizations cross this threshold, they must prepare for a dramatically expanded attack surface, shifting trust boundaries, and a new class of security challenges.

Unlike traditional automation tools that execute predefined instructions, agentic AI systems operate with goals. They plan, decide, and act — sometimes across extended timelines and across multiple systems — all on behalf of humans. These agents are not simply responding; they are pursuing outcomes. And in doing so, they introduce a profound shift in how security leaders must approach application and infrastructure protection.

It’s the shift from automation tools that help you do something to systems that try doing something for you.

Why agentic AI is different

Agentic AI changes the risk surface in ways that go beyond conventional GenAI concerns. With agents acting semi-independently, often chaining through memory, external tools, and other agents, the predictability that security relies on begins to dissolve.

The agentic shift challenges established assumptions in security and produces:

  • Unpredictable behavior: Agents react to dynamic contexts, leading to nondeterministic actions.

  • An expanded attack surface: Agents interact across APIs, tools, and identities, often collaborating with other agents — blurring trust boundaries.

  • New layers of risk: Long-term memory, tool access, and multi-agent orchestration create new layers of risk.

This autonomy blurs the trust boundaries that organizations once considered stable. Agent identities, intents, and interactions are harder to verify and track. An agent operating under a service account can orchestrate a chain of actions that accesses APIs, triggers workflows, or queries sensitive data stores — not with malice, but simply because that is how it pursues its task.

Autonomy with consequences

While the agentic paradigm shift unlocks powerful new capabilities, it also introduces a fundamentally different class of security risks — risks rooted not just in use, but in the very design and architecture of these systems, including an immature security ecosystem, excessive agency and hijacking risks, and lateral propagation and cascading hallucination.

Immature security ecosystem

Many emerging agent protocols, like the Model Context Protocol (MCP), were designed for functionality — not security. Basic safeguards such as identity binding, authentication, validation, and policy enforcement are often missing or optional. This leaves the door wide open for impersonation, spoofing, and unauthorized access.

Excessive agency and hijacking risks

Agentic systems often operate with broad mandates. Without clear boundaries, attackers can hijack their behavior — coaxing them into actions their designers never intended. A subtle prompt injection can turn a helpful planner into a dangerous proxy.

Lateral propagation and cascading hallucinations 

Compromised agents rarely fail in isolation — they can influence or mislead others, enabling misinformation or malicious goals to spread via agent-to-agent interactions. This breaches containment boundaries, drives lateral movement, and transforms localized issues into systemic failures.

Autonomous access, autonomous threats

With the rapid rise of agentic AI, businesses are beginning to see a new class of autonomous systems — search agents, shopping assistants, and decision-making copilots — that access their applications on behalf of users. These agents independently navigate websites, query APIs, and retrieve content, signaling a shift in how digital services are consumed.

But as access becomes more autonomous, so do the threats. The same pathways designed to support helpful AI agents can be exploited by malicious ones — scraping proprietary content, manipulating pricing or inventory, and automating high-frequency transactions that distort business operations and degrade user experience.

Vibe scraping

This evolution doesn’t just empower enterprises; it transforms how adversaries operate. Attackers can now deploy autonomous agents that execute adaptive, large-scale attacks with minimal oversight. Rather than relying on rigid scripts, they set high-level goals — like acquiring exclusive inventory or extracting competitive data — and let AI agents decide how to get there. The result is a new form of abuse: vibe scraping, which is fluid, opportunistic, and difficult to detect.

The core challenge for defenders is intent ambiguity. Legitimate AI agents and malicious automation often use the same toolchains, access patterns, and APIs, which blurs traditional trust boundaries and makes it increasingly difficult to separate trusted use from targeted abuse.

Resilience by design: Securing agentic application channels

Akamai helps address the challenges — whether stemming from third-party agents that access applications or from internally developed agentic workflows — by providing protection where agentic access actually occurs: at the edge and across APIs, application flows, and the network layer.

Akamai’s capabilities support granular access controls, abuse mitigation, and segmentation that limit the blast radius of misbehaving or compromised agents. Whether it’s stopping unauthorized AI scraping, enforcing behavioral boundaries, detecting prompt injections resulting in excessive and cascading access, or preventing lateral spread, Akamai enables organizations to maintain control as they adopt more autonomous and AI-driven systems.

Akamai Firewall for AI: Specialized threat protection

One of the primary triggers for agentic threat chains is untrusted user input — particularly prompt injection attacks that subtly divert agents from their original objectives. Because agents rely on natural-language instructions passed through prompts, a malicious actor can slip in commands that cause the agent to misinterpret its goals, execute unsafe or unintended actions, or interact with tools incorrectly, which may lead to unauthorized access, data leakage, or arbitrary command execution.

Akamai Firewall for AI defends against this by detecting and neutralizing prompt-injection attempts before they ever reach the planning or execution layer. At its core is the intelligence platform, which fuses cross-domain signals coming from Akamai App & API Protector, Akamai API Security, and bot and abuse protection solutions. This unified intelligence surfaces anomalous flows from rogue agents and detects attacks that target customers' agentic workflows.

Advanced API security: Context-aware access monitoring

Advanced API security plays a critical role in managing agentic AI risks, offering visibility into systems that often access APIs and tools with broad or excessive permissions.

By analyzing traffic for intent mismatches — where agents make valid calls but in patterns that deviate from expected workflows — it becomes possible to detect misuse. This includes identifying agents that chain authorized actions to achieve unintended or unauthorized outcomes, which can highlight subtle forms of abuse.

Zero Trust: Constraining agentic autonomy

Akamai helps enterprises deploy agentic AI securely by combining access control with network segmentation. With Enterprise Application Access, agents get identity-based access based on role and context. All interactions are proxied and monitored, preventing direct exposure.

At the same time, Akamai Guardicore Segmentation limits agents to predefined communication paths and blocks unauthorized lateral movement. Together, these controls ensure that agents only access what they need, which reduces privilege creep and enforcing clear trust boundaries.

Bot and abuse protection: Intent-based agent profiling

Akamai bot and abuse protection solutions help distinguish legitimate AI agent use from adversarial agentic orchestration. They mitigate content scraping, automation abuse, and identity obfuscation — without disrupting real users or enterprise agents — thereby supporting a secure transition to new agentic business models.

Layer 7 distributed denial-of-service protection: Memory throttling

Memory is a core pillar of agentic AI, enabling context-aware reasoning and sustained interactions — but it also presents a soft underbelly. Akamai App & API Protector helps defend applications against resource exhaustion attacks that exploit this dependency, especially when launched through distributed agent frameworks.

Conclusion: Building agentic AI, securely

As AI agents evolve from simple assistants to autonomous actors, they’re no longer just tools we control — they’re systems that make decisions on our behalf. This shift brings powerful new capabilities, but also a new class of security risks.

Security teams today face a dual mandate: enable innovation and prevent disruption. Agentic AI will drive productivity, unlock insights, and enable new business models — but it must be deployed or consumed with control.

Furthermore, as businesses increasingly allow AI agents to access applications on users’ behalf, the same interfaces and APIs designed for agentic consumption can also be exploited by malicious agents — blurring the line between legitimate use and targeted abuse.

Ensure that agentic AI works for your enterprise, not against it

Akamai provides the visibility, intelligence, and enforcement to ensure that agentic AI works for your enterprise, not against it. With world-leading application security protections and Zero Trust controls, we evolve our solutions alongside emerging agentic systems to help organizations secure the autonomy frontier.

Let agents work for you — not against you.



Maxim Zavodchik

Written by

Maxim Zavodchik

August 15, 2025

Maxim Zavodchik

Written by

Maxim Zavodchik

Maxim Zavodchik is an experienced security research leader with a proven track record in establishing, growing, and defining strategic vision for Threat Research and Data Science teams in Web Application Security and API Protection. When he’s not protecting life online, you can find him being a super dad and/or watching Studio Ghibli movies.