MacMusic  |  PcMusic  |  440 Software  |  440 Forums  |  440TV  |  Zicos
agents
Search

AI agents can (and will) be scammed

Tuesday April 1, 2025. 12:00 PM , from ComputerWorld
AI agents can (and will) be scammed
Generative AI’s newest superstars — independent-acting agents — are on a tear. Organizations are adopting the technology at a staggering rate because they can use APIs or be embedded with standard apps and automate all kinds of business processes.

An IDC report predicts that within three years, 40% of Global 2000 businesses will be using AI agents and workflows to automate knowledge work, potentially doubling productivity where successfully implemented.

Gartner Research is similarly bullish on the technology. It predicts AI agents will be implemented in 60% of all IT operations tools by 2028, sharply up from less than 5% at the end of 2024. And it expects total agentic AI sales to reach $609 billion over the next five years, Gartner said.

Agentic AI is gaining popularity so quickly because it can autonomously make decisions, take actions, and adapt to achieve specific business goals. AI agents like OpenAI’s Operator, DeepSeek, and Alibaba’s Qwen aim to optimize workflows with minimal human oversight.

Essentially, AI agents or bots are becoming a form of digital employee. And, like human employees, they can be gamed and scammed.

For instance, there have been reports of AI-driven bots in customer service being tricked into transferring funds or sharing sensitive data due to social engineering tactics. Similarly, AI agents handling financial transactions or investments could be vulnerable to hacking if not properly secured.

In November, a cryptocurrency user tricked an AI agent named Freysa to send $50,000 to their account. The autonomous AI agent had been integrated with the Base blockchain, designed to manage a cryptocurrency prize pool.

To date, large-scale malicious abuse of autonomous agents remains limited, but it’s a nascent technology. Experimental instances show potential for misuse through prompt injection attacks, disinformation, and automated scams, according to Leslie Joseph, a principal analyst with Forrester Research.

Avivah Litan, a vice president and distinguished analyst at Gartner Research, said AI Agent mishaps “are still relatively new to the enterprise. [But] I have heard of plenty potential mishaps discovered by researchers and vendors.”

And AI agents can be weaponized for cybercrime.

Gartner Research

“There will be a great AI awakening — people learning how easily AI agents can be manipulated to enact data breaches,” said Ev Kontsevoy, CEO of Teleport, an identity and access management firm. “I think what makes AI agents so unique, and potentially dangerous, is that they represent the first example of software that is vulnerable to both malware and social engineering attacks. That’s because they’re not as deterministic as a typical piece of software.”

Unlike a large language model (LLM) or genAI tools, which usually focus on creating content such as text, images, and music, agentic AI is designed to emphasize proactive problem-solving and complex task execution, much as a human would. The key word is “agency” — software that can act on its own.

Like humans, AI agents can be unpredictable and easily manipulated by creative prompts. That makes them too dangerous to be given unrestricted access to data sources, Kontsevoy said.

Unlike human roles, which have defined permissions, similar constraints haven’t been applied to software. But with AI capable of unpredictable behavior, IT shops are finding they need to impose limits. Leaving AI agents with excessive privileges is risky, as they could be tricked into dangerous actions, such as stealing customer data — something traditional software couldn’t do.

Organizations, Kontsevoy said, must actively manage AI agent behavior and continually update protective measures. Treating the technology as fully mature too soon could expose organizations to significant operational and reputational risks.

Joseph agreed, saying businesses using AI agents should prioritize transparency, enforce access controls, and audit agent behavior to detect anomalies. Secure data practices, strong governance, frequent retraining, and active threat detection can reduce risks with autonomous AI agents.

Growing use cases amplify vulnerabilities

According to Capgemini, 82% of organizations plan to adopt AI agents over the next three years, primarily for tasks such as email generation, coding, and data analysis. Similarly, Deloitte predict enterprises using AI agents this year will grow their use of the technology by 50% over the next two years.

Benjamin Lee, a professor of engineering and computer science at the University of Pennsylvania, called agentic AI a potential ”paradigm shift.” That’s because the agents could boost productivity by enabling humans to delegate large jobs to an agent instead of individual tasks.

But by virtue of their autonomy, Joseph said, AI agents amplify vulnerabilities around unintended actions, data leakage, and exploitation through adversarial prompts. Unlike traditional AI/ML models with limited attack surfaces, agents operate dynamically, making oversight harder.

“Unlike static AI systems, they can independently propagate misinformation or rapidly escalate minor errors into broader systemic failures,” he said. “Their interconnectedness and dynamic interactions significantly raise the risk of cascade failures, where a single vulnerability or misstep triggers a domino effect across multiple systems.”

Some common ways AI agents can be targeted include:

Data Poisoning: AI models can be manipulated by introducing false or misleading data during training. This can affect the agent’s decision-making process and potentially cause it to behave maliciously or incorrectly.

Adversarial Attacks: These involve feeding the AI agent carefully crafted inputs designed to deceive or confuse it. In some cases, adversarial attacks can make an AI model misinterpret data, leading to harmful decisions.

Social Engineering: Scammers might exploit human interaction with AI agents to trick users into revealing personal information or money. For example, if an AI agent interacts with customers, a scammer could manipulate it to act in ways that defraud users.

Security Vulnerabilities: If AI agents are connected to larger systems or the internet, they can be hacked through security flaws, enabling malicious actors to gain control over them. This can be particularly concerning in areas like financial services, autonomous vehicles, or personal assistants.

Conversely, if the agents are well-designed and governed, their very AI’s autonomy could be used to enable adaptive security, allowing them to identify and respond to threats.

Gartner’s Litan pointed to emerging solutions called “guardian agents” — autonomous systems that can oversee agents across domains. They ensure secure, trustworthy AI by monitoring, analyzing, and managing agent actions, including blocking or redirecting them to meet predefined goals.

An AI guardian agent governs AI applications, enforcing policies, detecting anomalies, managing risks, and ensuring compliance within an organization’s IT infrastructure, according to business consultancy EA Principles.

While guardian agents are emerging as one method of keeping agentic AI in line, AI agents still need strong oversight, guardrails, and ongoing monitoring to reduce risks, according to Forrester’s Joseph.

“It’s very important to remember that we are still very much in the Wild West era of agentic AI,” Joseph said. “Agents are far from fully baked, demanding significant maturation before organizations can safely adopt a hands-off approach.”
https://www.computerworld.com/article/3856502/ai-agents-can-and-will-be-scammed.html

Related News

News copyright owned by their original publishers | Copyright © 2004 - 2025 Zicos / 440Network
Current Date
Apr, Thu 3 - 04:41 CEST