Stop LLM Attacks: How Security Helps AI Apps Achieve Their ROI

Christine Ferrusi Ross

Written by

Christine Ferrusi Ross

August 21, 2025

Christine Ferrusi Ross is a Product Marketing Director at Akamai, where she leads go-to-market messaging for the Application Security portfolio. Prior to Akamai, she worked with blockchain and security startups on product/market fit and positioning. She also spent many years as an industry analyst helping organizations buy and manage emerging technologies and services.

 

The opportunity to see major ROI from AI is massive, but only if the security and business teams work together.
The opportunity to see major ROI from AI is massive, but only if the security and business teams work together.

Contents

The fast pace of innovation often puts business teams and security teams at a communication disadvantage. It’s a tale as old as time: Line-of-business speaks in the language of user experience (UX), product roadmaps, and deadlines; enterprise security speaks of risks, threats, and attack signals. Tension rises as both sides push for what's important in their worlds — and often miss common ground.

Playing on the same team

The skyrocketing investment in applications that use artificial intelligence (AI) or large language models (LLMs) gives both teams a fresh start. Business executives may not know the ins and outs of nuanced threats such as LLM prompt injection, but they can sense the risks. 

For example, in a recent KPMG survey, U.S. business executives show a keen awareness of AI's risks: 81% cited cybersecurity as the biggest barrier to AI adoption, while 78% identified data privacy as a primary concern.

Security teams can (and should) respond in kind by demonstrating awareness of the business drivers behind AI innovations like chatbots and agents. Adding the business context helps security teams and their business stakeholders close the gap between AI security and protecting the business.

Every AI-driven app has a purpose, grounded in imperatives ranging from customer needs to sales objectives. And as organizations will soon find out, those imperatives are under threat from a new breed of AI-specific attack methods — and both teams will have to come together to get the desired return on their investment (ROI) in AI-driven apps.

Focus on the purpose of the app and the expected ROI

Let’s dig deeper into the applications themselves. They’re not “AI apps” to the business. The execs and application development teams that invested in and built these apps talk about them based on their purpose first and the technology used to build them second

For example, customer service isn’t talking about their AI app; they’re talking about how their chatbot that uses AI is going to help them give customers more relevant answers, faster and more efficiently.

And these customer service stakeholders have objectives for these apps — create revenue, streamline operations, and reduce costs. They likely spent millions building the apps and are very concerned about getting the desired ROI.

Business apps that use AI and their objectives

Let’s look at some examples of business applications that use AI and the desired objectives of those apps, including:

  • Customer service chatbots
  • Recommendation engines
  • Personalized search
  • Diagnostics

Customer service chatbots

Companies are building AI-driven chatbots that live on their websites to provide users with more accurate, faster answers than were possible before. Businesses may be looking for cost savings and operational improvements, as well as improved customer satisfaction scores.

Recommendation engines

Companies with complex products, like financial services, are using recommendation engines to help customers decide which product is best, based on multiple criteria, helping the company increase sales and helping the customers choose the most appropriate solution for their needs. They’ll likely measure ROI based on higher sales and customer lifetime value, for example.

Organizations have long sorted search results by relevance to the keywords, but now they’re enhancing search capabilities with AI so that customers get recommendations that are most relevant and specific to that user. Success for these apps could be reduced time to reach the desired answer, higher customer satisfaction, and more engagement with the company’s website.

Diagnostics

Healthcare organizations are using AI-powered apps to help narrow the field of potential conditions to the most likely ones, allowing the patient and healthcare provider to diagnose more quickly. Other industries that also require diagnostics use similar apps. The outcomes include operational efficiency, a reduction in the number of in-person visits needed to get to a diagnosis, and better staff-to-consumer ratios for in-person meetings.

How business operations and security converge in AI-driven apps

Here’s where the business problem — getting ROI from AI-driven apps — converges with security. The expected returns from those business-focused applications could easily be lowered or wiped out completely if those apps are manipulated or attacked. This is where security can drive not only protection, but also enhance business value. 

Some examples of potential threats include:

  • Threat actors manipulating chatbots 
  • Attackers exploiting diagnostic tools

Threat actors manipulating chatbots

A threat actor could manipulate the customer service chatbot with a series of prompts to get discounted pricing or free services for a product that’s been purchased. Imagine that happens twice per day (a small attack by security standards) — now the company sees lower sales revenue than expected, lowering the app’s ROI. 

Although this is not technically a security issue, security pros can use this as an opportunity to increase their value to the business because they can help the company realize stronger ROI.

Attackers exploiting diagnostic tools

An attacker could manipulate the diagnostic tool into providing malicious output by using particular prompts. Imagine, for example, a diagnostic engine that tells patients that certain conditions are the result of their poor lifestyle choices and that they shouldn’t bother trying to fix those conditions. 

The costs of legal and potentially regulatory actions resulting from such output could wipe out any cost savings or process improvements that the company expected from the app. But if security can provide protection against these attacks, they’re protecting their organizations and patients from more than just data loss or reputation damage. 

5 ways security teams can partner with business teams for better results

Overwhelmed security teams might be tempted to say that ensuring that these AI apps run properly is the responsibility of the business units who built them. But that self-eliminates the security team from decision-making,  which may lead the business units to shut them out of any future conversations as well. 

Instead, security execs need to use this opportunity to show that they must be at the table when making business decisions — so that security can enhance and support the business, not slow it down. 

Five ways that security teams can do this include:

  1. Educate the business about the detrimental effect of LLM attacks on ROI 
  2. Communicate best practices for security AI in a business context
  3. Hire dedicated AI security personnel
  4. Deploy AI-specific protections
  5. Establish an AI governance framework around AI data and use

Educate the business about the detrimental effect of LLM attacks on ROI

It’s important to educate the business about the potential loss of revenue, increased costs, and other impacts that may lower (or even erase) any projected ROI if AI-driven apps are attacked. Doing so will show that the security team is proactive and thinking about business objectives with a partner mindset.

Communicate best practices for securing AI in business context

Let’s revisit the example of a threat actor manipulating a customer service chatbot to get discounted pricing or free services. Security teams can help business leaders understand what capabilities are needed to stop these attacks by weaving in the business stakes by explaining that:

  • This is prompt injection, which calls for a solution like Akamai Firewall for AI that can detect and block malicious inputs.
  • A traditional tool can’t help; you need detections that are designed specifically for AI’s unique vulnerabilities and risks.
  • For outputs, you need solutions that can detect app responses that are toxic, off-topic, or otherwise undesired.

Hire dedicated AI security personnel

When your team has dedicated AI security experts on it, they’ll see problems faster and mitigate them more quickly. Over time, those team members can then train the rest of the team so that when AI adoption expands, security is ready.

Deploy AI-specific protections

AI-specific attacks can cause repercussions beyond damage to data, revenue, and reputation. Solutions such as Firewall for AI are designed to address runtime threats and help protect against AI-specific attacks that can lead to a number of damaging consequences.

Establish an AI governance framework around AI data and use

Security teams can work with business execs to create and manage a framework that helps the entire organization glean the most value from AI without being impeded by threats.

Learn more

The opportunity to see major ROI from AI is massive, but only if the security and business teams work together. Dive deeper into this topic by reading our new white paper on how to secure AI apps in the age of rapid innovation.



Christine Ferrusi Ross

Written by

Christine Ferrusi Ross

August 21, 2025

Christine Ferrusi Ross is a Product Marketing Director at Akamai, where she leads go-to-market messaging for the Application Security portfolio. Prior to Akamai, she worked with blockchain and security startups on product/market fit and positioning. She also spent many years as an industry analyst helping organizations buy and manage emerging technologies and services.