Agentic AI Paves the Way for Sophisticated Cyberattacks

Gartner analysts discuss how agentic AI will transform business operations by 2028, while also raising the risk of cyberattacks

Berenice Baker, Editor, Enter Quantum

January 14, 2025

7 Min Read
A cybersecurity team at work
Getty Images

Agentic AI is widely predicted to be one of the key technology themes for 2025 due to its ability to autonomously analyze data, make decisions and execute tasks. But it also presents unprecedented risks.

Gartner’s recent research report “Top Strategic Technology Trends for 2025: Agentic AI” forecasts that by 2028, 33% of enterprise software will incorporate Agentic AI, 20% of digital storefront interactions will be conducted by AI agents and 15% of day-to-day decisions will be made autonomously.

However, agentic AI is opening the door to advanced cyberattacks, including smart malware, prompt injections and malicious AI agents. Without proper guardrails, organizations face operational disruptions, governance breakdowns and reputational damage.

In this interview, Gartner distinguished vice president analyst Gary Olliffe and research vice president of AI and cybersecurity Jeremy D’Hoinne discuss how organizations can get the most from agentic AI while preventing the introduction of security vulnerabilities.

IoT World Today: What is agentic AI and what can it do?

Gary Olliffe: Agentic AI uses the knowledge and intelligence that's embedded in LLM-based models to help drive process automation by taking decisions and planning actions rather than just responding to questions.

Related:Sony Honda’s Afeela Automated Car Launching This Year; Starting at $90K

The new component is software orchestration. We give it information about a task that we want to complete together with some guidance and structure on how it might be completed. We're asking the language model to give us a plan or an approach to complete that task. It’s system-driven behind the scenes, rather than relying only on what the user is typing into a chat prompt.

Jeremy D'Hoinne: This new component translates your approximate questions into an action plan. Think about all of the steps you use when you use a search engine to plan your vacation, the LLM  figures them out based on its training set and books it. Before that, the assistant would give you a recommendation but you’d have to book it.

What are the risks associated with agentic AI systems?

Gary Olliffe: Because these agents are using LLMs, they're still susceptible to hallucinations or misinterpretation. When giving the agent a broad requirement, it's going to interpret that in a certain way and that may not be the way that the user or the originator of the request intended.

There’s an operational risk that we need to engineer out to the point where we trust the agent enough to do the work that it's intended to do. Low-risk use cases are going to be the first opportunities for business where, if an error does happen, it's of low consequence, like if an email gets sent to the wrong user one out of a million times.

Related:Quantum Cybersecurity in 2025: Post-Quantum Cryptography Drives Awareness

But if we're talking about financial transactions, we're talking about reputational impact for an organization. You have to think very carefully about that trust level, because it is a nondeterministic system.

Jeremy D'Hoinne: The autonomy multiplies the risk. If I do one money transfer buyer and make a mistake, it's capped by the amount I can transfer from my own account. I'm allowing it to do thousands of transactions a minute in high-frequency trading, the risk is much higher. The scale, the type of actions and the level of autonomy are all factors.

What security risks might agentic AI introduce? 

Jeremy D'Hoinne: With any new technology you are going to have existing threats, new threats and traditional vulnerabilities with bigger consequences. If I have a bug that crashes the orchestration layer, because the orchestration layers call a thousand actions instead of just one, I could create a denial of service simply because I messed up the orchestration layer. I tell clients that the sum of the vulnerabilities is higher than the vulnerability of each component.

Another way of seeing it is that we know the best way to secure automation is to limit the agency you give to an agent based on the risk. If you don't trust the agent and can't bring 100% vulnerability assessment to this agent, we are going to limit the agency until we know better.

Gary Olliffe: Another factor is identity. These agents are often acting on behalf of a user and you need to be able to propagate and trace the identity of the person that asked the agent to do something, it's going to act on their behalf. When it wants to transfer some funds, it acts on my bank account, credit card, document repository in Microsoft 365 or my sales opportunities in Salesforce. There's an identity propagation complexity to that autonomy that complicates the creation of these solutions.

Hugging Face, the open community for model sharing, released an agent framework just before Christmas, called Smolagents and its focus is code execution. The LLM is not just saying, I want to use this tool and here's some information to pass to the tool, it's generating code to solve a problem and running it. You need to make sure you're running that code in a secure, protected environment because that code could do arbitrary things.

There are demos or frameworks that do that code execution that if you run them on your local machine, you could instruct the agent to delete files to modify data or to break things that you wouldn't expect. Code execution capability is another level of risk, beyond the tools and the tool calling and the API side of things, where you've got some existing protection.

What guardrails can protect against agentic AI cybersecurity risks?

Gary Olliffe: It requires a multi-level, strength-in-depth approach. There are guardrails at the LLM level that are implemented by your provider. For example, OpenAI, Anthropic or Google protect against not safe for work content like terrorist information, abuse and sexual content.

Then you've got any enterprise-level organizational guardrails that you might apply in addition to those that might be implemented at an AI gateway with the site. It’s a capability that will become a feature of platforms in due course, but at the moment, it's an extra layer of protection.

Within the agentic software itself, where it's a personal agent rather than an enterprise-scale solution, that’s another point where you have to put the guardrails in place. These include the prompts that you will accept and those you want to filter out before they even ever get to the LLM, the checks you want to put in place around the responses to the planning requests - that orchestration piece. The narrower the scope of an agent, whatever an agent ends up being, the easier it is to put a boundary around it.

We're quickly making the leap from talking about an agent to talking about multi-agent systems, where agents talk to other agents that have particular capabilities, then you've got an extra level of guardrails that you've got to think about.

Jeremy D'Hoinne: From a security point of view, if your business wants to use an agent or a multi-agent system you will have choices. You can put cooperation on the input, that works well. If you have a single agent, you can put guardrails on the input and output.

The fewer types of actions the agent is capable of doing, the easier it is for security to put control and guardrails on the action rather than the LLM. That's why multi-agent systems with a well-defined role for each agent are easier to secure individually than a big black-box agent.

In terms of the LLM itself, the challenge with guardrails is that we find new prompt injection techniques every two weeks. There's a limit to how much you can trust guardrails for the LLM part and that's why agent security is going to probably be a mix of rules and monitoring on the LLM part and policy and enforcement on the action part.

This article was first published in IoT World Today's sister publication AI Business.

About the Author

Berenice Baker

Editor, Enter Quantum

Berenice is the editor of Enter Quantum, the companion website and exclusive content outlet for The Quantum Computing Summit. Enter Quantum informs quantum computing decision-makers and solutions creators with timely information, business applications and best practice to enable them to adopt the most effective quantum computing solution for their businesses. Berenice has a background in IT and 16 years’ experience as a technology journalist.

Sign Up for the Newsletter
The most up-to-date news and insights into the latest emerging technologies ... delivered right to your inbox!

You May Also Like