
Nearly half of agentic AI projects will be killed by ’27 due to hype, costs, and risks

Monday July 7, 2025. 06:36 PM , from ComputerWorld
More than 40% of agentic AI projects will be canceled by 2027 due to rising costs, unclear business value, or inadequate risk controls, according to new research from Gartner.

That’s an even higher percentage than a year ago, when Gartner predicted that by the end of this year 30% of generative AI (genAI) projects would be abandoned after a proof of concept. (Agentic AI is the latest phase in the evolution of genAI models; agents can autonomously complete tasks and even mimic the behavior of humans.)

According to a Gartner poll of 3,412 webinar attendees earlier this year, 19% said their organization had made significant investments in agentic AI, 42% had made conservative investments, and 8% had made no investments, with the remaining 31% taking a wait-and-see approach or unsure.

The push for agentic AI rollouts is on an upswing, with IDC predicting that within three years, 40% of Global 2000 businesses will be using AI agents and workflows to automate knowledge work, potentially doubling productivity where successfully implemented.

Agentic AI is based on AI-enabled applications capable of perceiving their environment, making decisions, and taking actions to achieve specific goals. The key word is “agency,” which allows the software to take action on its own. Unlike genAI tools — which usually focus on creating content such as text, images, and music — agentic AI is designed to emphasize proactive problem-solving and complex task execution.
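The perceive-decide-act cycle described above can be illustrated with a minimal sketch. Everything here is hypothetical for illustration (the toy `inbox` environment, the function names, the goal-matching rule); real agentic systems replace the `decide` step with an LLM-driven planner and the `act` step with tool calls.

```python
def perceive(environment):
    """Observe the current state of a toy environment (a task inbox)."""
    return environment["inbox"]

def decide(observations, goal):
    """Choose the next action: handle any pending task matching the goal."""
    pending = [task for task in observations if task["topic"] == goal]
    if pending:
        return ("handle", pending[0])
    return ("idle", None)

def act(environment, action):
    """Execute the chosen action, mutating the environment."""
    verb, task = action
    if verb == "handle":
        environment["inbox"].remove(task)
        environment["done"].append(task["id"])

def run_agent(environment, goal, max_steps=10):
    """Loop perceive -> decide -> act until no work matches the goal."""
    for _ in range(max_steps):
        action = decide(perceive(environment), goal)
        if action[0] == "idle":
            break
        act(environment, action)
    return environment["done"]

env = {
    "inbox": [{"id": 1, "topic": "invoice"},
              {"id": 2, "topic": "invoice"},
              {"id": 3, "topic": "hr"}],
    "done": [],
}
print(run_agent(env, "invoice"))  # → [1, 2]
```

The point of the loop structure is the "agency" the article highlights: the software repeatedly observes and acts on its own until it judges the goal complete, rather than producing a single output per prompt as a genAI content tool does.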

Currently, implementations are aimed at bolstering user productivity and incrementally improving applications through the agent-based interface, according to Anushree Verma, a senior director analyst at Gartner. The problem with AI agent projects is that, like genAI projects, they're often driven by fear of missing out rather than by a clearly defined set of metrics and business goals.

It turns out agents aren't smart and often don't deliver ROI

Another recent study by Carnegie Mellon University (CMU) and Salesforce assessed the performance of AI agents and found the technology failed at tasks 70% of the time — and many of those tasks were pedestrian at best. CMU researchers created a simulated small software firm called “TheAgentCompany,” where they tested leading AI agents (including Claude 3.5 Sonnet, Gemini 2.0 Flash, GPT‑4o, and others) on multi‑step office tasks — a mix of engineering, sales, HR, and finance jobs.

The agents struggled with simple actions, such as closing pop-up dialogs, interpreting common file formats, or identifying contacts correctly. Some even “cheated” by renaming users to simulate progress. The study also found AI agents had limited human‑like performance. Even top agents only reliably completed a quarter of workplace tasks in a controlled setting.

Graham Neubig, an associate professor in CMU's Language Technologies Institute (LTI) who directed the development of TheAgentCompany, said the agents' low success rates "met or slightly exceeded" expectations based on benchmarking tools he'd used before. For example, website navigation can be tough for AI tools, as shown by one agent's inability to close a pop-up window. "It's a silly little thing that wouldn't bother a human at all," Neubig said in a CMU article on the study.

The lack of social skills was evident when one AI agent never bothered to connect with the company's HR manager despite being instructed to do so. In another case, an agent failed to recognize the relevance of a ".docx" file extension, showing that the tools often lack common sense.

Agents also have a strange habit of “deceiving themselves,” the study found. “Interestingly, we find that for some tasks, when the agent is not clear what the next steps should be, it sometimes tries to be clever and create fake ‘shortcuts’ that omit the hard part of the task,” the researchers noted.

Organizations won’t see a strong return on investment (ROI) by focusing only on user augmentation. To maximize value, they must automate low-skill, repetitive tasks with genAI, freeing humans for higher-value work “driving both cost savings and efficiency,” Gartner’s Verma said.

AI projects often fail when their scope is unclear and team capabilities are overestimated. Without focus, workers can get stretched too thin and miss key priorities, according to John Callery-Coyne, chief product and technology officer at ReflexAI, a company that sells AI-powered training and quality assurance tools for crisis hotlines, emergency responders, and customer service centers.

The key to success: clear goals, internal champions

A more effective approach to agentic AI deployment focuses on clear goals, set budgets, and strong internal champions — often supported by external vendors who deliver fast, measurable ROI, Callery-Coyne said. "Regardless of approach, it's essential that the AI tools are deployed within regular workflows so there is sustained utilization that drives long-term results," he said.

Despite early hurdles, agentic AI does mark a major leap for genAI, enabling smarter automation, efficiency, and innovation beyond traditional bots, Gartner said in its report. The research firm predicts at least 15% of day-to-day work decisions will be made autonomously through agentic AI by 2028, up from zero in 2024. In addition, a third of enterprise software applications will include agentic AI by 2028, up from less than 1% in 2024.

Still, most agentic AI projects are “early, hype-driven experiments” that often obscure the true cost and complexity of scaling the tools, delaying adoption, Verma said. Instead, organizations need to look past the hype and choose AI use cases wisely.

Neeraj Abhyankar, VP, Data & AI at R Systems, a digital product engineering and technology services firm, agreed. Many agentic AI projects fail, he said, because companies chase hype without understanding the business value or preparing workflows, skills, and data.

“Similar to genAI, many companies have launched [proofs of concept] in isolation from core business workflows,” Abhyankar said. “As a result, these systems are solving pristine, ideal situations rather than practical scenarios teams actually face. With agentic AI, the stakes are higher. These systems not only require accurate generation, but also reasoning, decision-making, and action.”

Success with agentic AI starts by embedding the tools in existing workflows and involving employees to build trust “and get buy-in,” he said. “When done this way, organizations can scale their agentic AI with confidence.”

Amid all the hype around agentic AI, many vendors are "agent washing" older tools such as chatbots and robotic process automation products by rebranding them as agents. Gartner estimates that only about 130 of the thousands of agentic AI vendors offer genuine agentic capabilities.

“Most agentic AI propositions lack significant value or return on investment (ROI), as current models don’t have the maturity and agency to autonomously achieve complex business goals or follow nuanced instructions over time,” Verma said.

In this early stage, Gartner recommends agentic AI only be pursued where it delivers clear value. Integrating agents into legacy systems can be technically complex, often disrupting workflows and requiring costly modifications. In many cases, rethinking workflows with agentic AI from the ground up is the ideal path to success.

“To get real value from agentic AI, organizations must focus on enterprise productivity, rather than just individual task augmentation,” Verma said. “They can start by using AI agents when decisions are needed, automation for routine workflows and assistants for simple retrieval. It’s about driving business value through cost, quality, speed and scale.” 
https://www.computerworld.com/article/4016206/nearly-half-of-agentic-ai-projects-will-be-killed-by-2...

