Agentic AI Set to Rise, With New Cybersecurity Risks: Gartner
The autonomous technology could help CIOs deliver their AI goals but needs legal and ethical guidelines
Agentic AI could dramatically expand AI's potential and could be included in 33% of enterprise software applications by 2028, up from 1% today, according to research and advisory firm Gartner.
But along with potentially game-changing benefits, the technology brings new risks and security threats above and beyond those inherent to AI models and applications, said Avivah Litan, a distinguished vice president analyst at Gartner.
Until now, large language models (LLMs) have not acted on their own initiative, but with agentic AI they can act autonomously with minimal human supervision, adapting to their context and executing goals in complex environments.
This ability could dramatically increase AI’s potential by enabling it to examine data, perform research, compile tasks and complete them in the digital or physical world via APIs or robotic systems.
For example, future agentic AI systems with full agency could learn from their environment, make decisions and perform tasks independently.
Gartner, which listed agentic AI as its top strategic technology trend for 2025, predicted in a briefing note that by 2028, AI agent machine customers could replace 20% of the interactions at human-readable digital storefronts.
And by 2028, at least 15% of day-to-day work decisions could be made autonomously through agentic AI, up from zero in 2024.
However, Gartner said users should be aware of the additional risks.
“With standard AI models and applications, the threat surface is often limited to inputs, model processing and outputs, the software vulnerabilities in the orchestration layer and the environments that host them,” said Litan.
“When using AI agents, the threat surface expands to include the chain of events and interactions they initiate and are part of, which by default are not visible to and cannot be stopped by human or system operators.”
Threats include data exposure or exfiltration anywhere along the chain of agent events, as well as unauthorized, unintended or malicious coding logic errors by AI agents that can lead to data breaches or other incidents, Litan said. There are also supply chain risks from using libraries or code downloaded from third-party sites for use in agents.
To manage these threats, Litan said IT leaders should educate their organization on the inherent risks of AI agents, which may be embedded in enterprise products, and take steps to mitigate them. Those steps could include detecting and flagging anomalous AI agent activities or those that violate preset enterprise policies, as well as viewing and mapping all AI agent activity and information flows.
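As a rough illustration of that idea, rather than any Gartner recommendation or a specific product's API, a minimal sketch might wrap every agent action in a policy gate that logs the action and blocks those that break preset rules. All names and rules below (AgentAction, PolicyGate, the two example policies) are hypothetical:

```python
# Minimal sketch: log every AI agent action and flag those that
# violate preset enterprise policies. Illustrative only; all names
# and rules here are hypothetical, not a real product's API.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable


@dataclass
class AgentAction:
    agent_id: str       # which agent initiated the action
    tool: str           # e.g. "http_request", "db_query", "send_email"
    target: str         # resource the action touches
    payload_bytes: int  # rough size of data crossing the boundary


# A policy is a predicate: returns True if the action violates it.
Policy = Callable[[AgentAction], bool]


@dataclass
class PolicyGate:
    policies: dict[str, Policy]
    audit_log: list[dict] = field(default_factory=list)

    def check(self, action: AgentAction) -> bool:
        """Record the action in the audit log; return True if it may proceed."""
        violations = [name for name, rule in self.policies.items()
                      if rule(action)]
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "agent": action.agent_id,
            "tool": action.tool,
            "target": action.target,
            "violations": violations,
        })
        return not violations


# Example preset rules: block egress to untrusted domains and bulk transfers.
gate = PolicyGate(policies={
    "untrusted_domain": lambda a: a.tool == "http_request"
        and not a.target.endswith(".example-corp.com"),
    "bulk_exfiltration": lambda a: a.payload_bytes > 1_000_000,
})

action = AgentAction("agent-7", "http_request", "api.unknown-site.io", 2048)
if not gate.check(action):
    print("Blocked:", gate.audit_log[-1]["violations"])
```

Because every action passes through the gate, the audit log doubles as the "map of all agent activity and information flows" the paragraph above describes, even for actions that no policy flags.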
Enterprises are already implementing or customizing products with AI agent capabilities, including Microsoft Copilot Studio, Azure AI Studio, Amazon Bedrock and Google NotebookLM.
Yet a large gap still exists between current LLM-based assistants and full-fledged AI agents. This is expected to close first for narrowly scoped activities and could eventually expand as the world learns how to build, govern and trust agentic AI solutions, Gartner said.
This article first appeared in IoT World Today's sister publication AI Business.