3 Key Areas to Focus on When You're Evaluating AI Security

Berk Veral

Written by

Berk Veral

August 01, 2025

Berk Veral

Written by

Berk Veral

Berk Veral is the Senior Director of Product Marketing at Akamai.

The question isn't whether you need AI security — it's whether you're going to be proactive about it or learn the hard way.
The question isn't whether you need AI security — it's whether you're going to be proactive about it or learn the hard way.

Contents

We are all painfully aware that the AI security space is moving faster than anyone expected. What started as a bunch of vendors slapping "AI-powered" stickers on existing tools has evolved into something much more sophisticated — and, frankly, much more necessary.

While every company is trying to make sensible investments in AI and offer innovative solutions, some are already disillusioned with the initial foray of so-called “AI-native security solutions” to secure applications that use AI models. Time is proving to be the ultimate enforcer that sifts through the noise and identifies real solutions.

The rapid changes in this space were especially clear at the 2025 Gartner Security & Risk Management Summit, and some key takeaways are worth emphasizing. 

Welcome to the AI security era

One key theme that is gaining significance is what's being called the AI security platform landscape. Think of it as covering two main fronts: making sure you can safely use AI tools (secure consumption) and making sure you can safely build AI applications (secure development).

The wild part? All the security categories that used to live in their own little silos — Cloud Native Application Protection Platform (CNAPP), security service edge (SSE), AI Security Posture Management (AI-SPM) — are suddenly bumping into one another in the AI space because in the world of AI with new attack methods, you cannot protect what you cannot see. 

But here's the kicker: Purpose-built AI security functions are actually starting to mature and differentiate themselves from the generic offerings.

The governance reality check

If your organization is rolling out one of the AI assistants — and let's be real: Who isn't at least thinking about it? — you need to wrap your head around role-aware AI governance.

This isn't your typical "set it and forget it" access control. We're talking about understanding who can ask what kinds of questions, what data they can access through AI interactions, and how to prevent your chatbot from becoming an accidental data leak machine. The infamous AI guardrails are becoming foundational security measures; think AI firewalls.

The technical stuff that actually matters

Why and how are AI threats different? The key is in the core AI model concepts that are turning into major attack vectors: context windows, system prompt leakage, and vector database limitations.

Context windows

Context windows are basically how much information an AI model can "remember" during a conversation — which sounds unrelated to security until you realize someone could potentially extract sensitive data by manipulating these boundaries.

System prompt leakage

System prompt leakage happens when the hidden instructions that guide an AI model get exposed. Think of it like finding the cheat codes that show you exactly how to manipulate the system.

Vector database limitations

Vector database limitations relate to how AI systems store and retrieve information. These databases have quirks that attackers are learning to exploit.

These aren't just theoretical problems anymore — they're showing up in real-world attacks.

The vendor land grab

Everyone who sits in the traffic path is scrambling to add AI controls to their platforms. It makes sense from a positioning standpoint, but the depth and quality of these bolt-on features vary wildly. Some vendors are clearly just checking boxes, while others are building genuinely useful capabilities.

Meanwhile, the AI-SPM category, which honestly looked pretty superficial when it first emerged, is starting to show real differentiation from traditional CNAPP tools when it comes to protecting AI applications specifically.

What to look for

When you're evaluating these platforms, focus on three key areas:

  1. The ability to customize controls for your specific use cases. Generic AI security is like generic antibiotics — sometimes it works, sometimes it doesn't.

  2. Vendor visibility and realigned defenses. How well does the vendor understand the attacks they're trying to prevent? A lot of vendors are still playing catch-up here.

  3. RAG-specific control mechanisms. Retrieval-Augmented Generation (RAG) systems — in which AI models pull information from your company's data to answer questions — are a particularly attractive target and need specialized protections that most traditional security tools weren't built to handle.

The bottom line

The AI security market is no longer just theoretical talk and fluff. Real differentiation is happening, real risks are emerging, and organizations that treat AI security as an afterthought are going to get burned. The good news? The tools to address these challenges are finally starting to mature. 

The question isn't whether you need AI security — it's whether you're going to be proactive about it or learn the hard way.

That’s why Akamai is committed to AI security. Our latest offering, Akamai Firewall for AI, is proof of how we are helping our customers start their journeys by securing their applications that use large language models (LLMs).



Berk Veral

Written by

Berk Veral

August 01, 2025

Berk Veral

Written by

Berk Veral

Berk Veral is the Senior Director of Product Marketing at Akamai.