These 4 Threats to AI Applications Require New Cybersecurity Strategies

Swati Kumar

Aug 28, 2025


Swati Kumar is a writer on the Akamai Cloud team, focusing on technology that helps engineers build and scale faster. She has been writing in the tech space for more than a decade, turning complex topics into clear and engaging stories for developers.


Large language models (LLMs) and AI applications are revolutionizing how enterprises operate and innovate — from AI-powered chatbots in customer service to agentic AI systems that make autonomous decisions. But with that transformation comes a new set of vulnerabilities and a significantly expanded attack surface that traditional cybersecurity solutions alone aren’t designed to handle. 

AI applications and LLMs are different from traditional software in that they are often autonomous and nondeterministic; therefore, they can be unpredictable. This makes them powerful, but vulnerable. 

The AI technologies that enterprises develop, use, and bring to market are often public-facing and interface directly with customers. They rely on vast datasets, open-ended user input, and dynamic responses that can’t always be controlled or anticipated.

New risks require modern strategies

In the face of these security risks, a traditional perimeter-based, rules-driven cybersecurity approach is not sufficient. And the massive investments that enterprises are making in AI are at stake. 

Cybersecurity is no longer an abstract concern with regard to AI adoption; the lack of protection is a very present danger that requires new approaches from today’s security teams. 

Consider these common AI security threats:

  • Hallucinated generative AI output could cause customers to lose trust in your ability to ensure safe interactions with your AI-powered applications.
  • A clever prompt injection could expose proprietary AI models, such as a chatbot that unwittingly divulges its own code to an attacker. 
  • A targeted denial-of-service (DoS) attack on resource-intensive AI systems could drain already stretched cloud budgets or bring AI operations to a halt.

What companies need now is a clear understanding of their AI applications’ vulnerabilities, an awareness of the most imminent security threats, and a knowledge of today’s best practices on how to mitigate them.

The 4 AI security threats companies should know about

In our 2025 report, Securing AI in the Age of Rapid Innovation, we explore the threat categories that we believe today’s cybersecurity teams must address to protect their companies’ AI investments. 

The four AI security vulnerabilities and fast-emerging attack methods we discuss in the report are:

  1. Prompt injection and jailbreaking
  2. Toxic AI output
  3. Data exfiltration and model theft
  4. DoS attacks tuned to AI weaknesses

Prompt injection and jailbreaking

These attacks manipulate LLMs by embedding hidden instructions inside user inputs. Malicious prompts can bypass restrictions, leak sensitive data, or even hijack the AI application’s behavior. They can suppress safety controls or manipulate the AI model’s output under the guise of a benign request. In essence, attackers use natural language as an exploit vector.
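As a rough illustration of the defensive side, here is a minimal sketch of an input screen that flags common injection phrasings before they reach the model. The pattern list and function names are hypothetical; a production system would rely on a trained classifier or a dedicated guardrail layer rather than regexes alone.

```python
import re

# Illustrative deny-list of phrasings often seen in injection attempts.
# This list is a placeholder, not an exhaustive or recommended set.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?(system prompt|instructions)",
    r"you are now",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs that match common prompt-injection phrasings."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

Heuristics like this catch only the crudest attempts; attackers routinely paraphrase around word lists, which is why layered defenses matter.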

Toxic AI output

Because AI is now capable of generating entirely new content, outputs can be biased, offensive, misleading, or outright dangerous — even when no malicious actor is involved. This is especially dangerous in customer-facing roles where an offensive chatbot response can trigger viral social media backlash or damage to the brand. Plus, in regulated industries, AI errors can result in noncompliance or litigation.
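One common mitigation is a post-generation check that inspects a response before it reaches the customer. The sketch below uses a placeholder word list purely for illustration; real deployments would call a moderation model or service, not a static set.

```python
# Placeholder deny-list; a real system would use a moderation model.
BLOCKED_TERMS = {"slur_example", "threat_example"}

def safe_to_send(model_output: str) -> bool:
    """Return False if the generated response contains a blocked term."""
    words = set(model_output.lower().split())
    return words.isdisjoint(BLOCKED_TERMS)
```

Gating output this way adds latency but gives the business a last checkpoint before a harmful response becomes public.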

Data exfiltration and model theft

When it comes to AI systems, traditional data breaches remain an ongoing risk, but the systems themselves can also leak sensitive data in response to the right questions. Attackers can systematically extract proprietary training data, trade secrets, or even replicate the behavior of the AI model itself through repeated queries. 

As a result, model theft is a growing concern: sophisticated adversaries can clone an expensive AI model just by observing how it responds.

DoS attacks tuned to AI weaknesses

Unlike traditional DoS attacks, which flood systems with traffic, AI DoS attacks focus on resource exhaustion. A small number of carefully crafted queries can saturate GPUs, max out memory, or exhaust API capacity. 

This is a particularly significant risk in cloud environments where compute costs scale with use. Security leaders should remember that AI systems are computationally expensive and easy to overload. 
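Because a single long prompt can cost far more compute than many short ones, request-count rate limits alone are a poor fit. A cost-aware alternative is to budget by estimated tokens, sketched below; the class, limits, and method names are hypothetical.

```python
class TokenBudget:
    """Throttle clients by estimated compute cost (tokens) rather than
    request count. Limits shown are illustrative, not recommendations."""

    def __init__(self, tokens_per_minute: int = 10_000):
        self.limit = tokens_per_minute
        self.used = {}  # client_id -> tokens consumed in current window

    def allow(self, client_id: str, estimated_tokens: int) -> bool:
        """Admit the request only if it fits the client's remaining budget."""
        used = self.used.get(client_id, 0)
        if used + estimated_tokens > self.limit:
            return False
        self.used[client_id] = used + estimated_tokens
        return True

    def reset(self) -> None:
        """Clear all budgets; call once per window, e.g., from a scheduler."""
        self.used.clear()
```

Budgeting by cost means one expensive query and a hundred cheap ones are treated the same way your cloud bill treats them.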

Security embedded into AI innovation

In the rush to innovate, many enterprises have deployed AI applications without building security into their design. Good news: It’s not too late to add critical security layers to your AI, from chatbots to AI agents.

In Securing AI in the Age of Rapid Innovation, we provide not only a deep analysis of the four key threats facing AI applications but also the best practices to secure against them.

Download the report to explore these attack vectors in depth, with real-world examples, mitigation strategies, and a framework for securing enterprise AI applications.

