Burned out by bots: The rise of prompt fatigue in the workplace
Wednesday, September 17, 2025, 01:00 PM, from ComputerWorld
Generative artificial intelligence (genAI) tools have quickly become commonplace in the workplace. In the rush to boost productivity, many organizations have adopted them without fully considering how best to integrate them into daily workflows.
Some employees are now complaining of “prompt fatigue” — a kind of cognitive drain similar to the “Zoom fatigue” of the pandemic era — stemming from the ongoing pressure to continuously craft, refine, and optimize prompts for these tools. Is this a real thing?

Providing context for why prompt fatigue may be taking hold, Leslie Joseph, principal analyst at Forrester, explains that the dominant paradigm for knowledge work since the rise of the internet has been what he calls “find and assemble.” “If you want to go get an understanding [of something], your first place to go is Google. You search, you collect all of those things, you stitch them together, and you assemble an output,” Joseph says.

But with the advent of large language models (LLMs), Joseph says the paradigm has changed to “query and refine,” a process that can be more frustrating. “Now you have this all-powerful AI that knows a lot more than you, and is also very good at stitching and assimilating all this information into something coherent that you can use, but it’s also unreliable,” he says.

As a result, knowledge workers must constantly go back and forth, re-architecting prompts and probing the LLM from various perspectives. Joseph says this breaks the flow of work necessary for deep engagement and better outcomes. He adds a caveat: this frustration has mainly been observed in studies focused on specific professions, particularly software engineers. It’s not yet clear whether the same pattern holds across all types of knowledge work.

Ramprakash Ramamoorthy, director of AI research at ManageEngine, a Zoho subsidiary focused on IT management, shares a similar view. According to him, prompt fatigue stems from three recurring challenges: deciding which LLM to use (it’s sometimes necessary to use more than one), determining which prompt to provide, and refining that prompt until the desired answer emerges.

It also doesn’t help that most LLMs lack a safety mechanism for uncertainty. “These LLMs are designed in such a way that they never say, ‘I do not know.’ They will tell you that you are right, and ‘I know that this is the answer,’” says Ramamoorthy.
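One common workaround is simply to tell the model that declining is acceptable. What follows is a minimal sketch, assuming the OpenAI Python client; the system prompt wording, the model name, and the example question are illustrative, not a recommendation from anyone quoted here:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Explicitly permit "I do not know" so the model is less inclined
    # to agree with the user or invent an answer.
    SYSTEM_PROMPT = (
        "If you are not confident in an answer, reply 'I do not know' "
        "instead of guessing, and say what extra information would help."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Why is this support ticket failing?"},
        ],
    )
    print(response.choices[0].message.content)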
Although definitions vary and further research is needed to fully understand the scope of prompt fatigue, one thing is clear: knowledge workers and companies can improve their integration and use of genAI tools.

The cost of thinking less with AI

One consequence of prompt fatigue is that it can undermine productivity rather than enhance it. Ramamoorthy points to cases where efficiency drops — for instance, when someone prompts multiple LLMs to resolve a ticket that would have been faster to handle manually.

According to Aaron McEwan, vice president, advisory at Gartner, research suggests that the impact of genAI on productivity depends on experience. “Junior employees can benefit quite significantly from using AI tools to support their work. Whereas with more experienced and tenured employees — it actually might slow them down and make it harder,” says McEwan. Indeed, a small study conducted by Model Evaluation & Threat Research recently found that AI slowed seasoned developers down by 19%, even though they thought they were working faster.

Despite this nuance, organizations often push AI adoption across the board, without regard to seniority or skill level. And there’s an additional risk for knowledge workers who need to build expertise. While they can learn a lot from using genAI tools, heavy reliance on these tools can hinder deep thinking.

McEwan offers a cognitive perspective: “If we’re shortcutting to answers without doing some of the critical thinking, we may not be creating the kind of rich neuronal connections that actually lead to deeper learning and things like expertise and wisdom,” he says.

Ramamoorthy echoes this concern, comparing it to how people once memorized phone numbers — a habit now largely lost in the age of smartphones. “The very same thing is happening in the LLM world as well: I am losing my ability to draft a response to the ticket because I am over-reliant on an LLM,” he says.

To reduce this over-reliance, Ramamoorthy suggests that users first recognize when they’re no longer thinking independently. From there, they must clearly define where LLM support is helpful and where it isn’t. He also advises users to identify which tools are best suited to their needs, such as having one LLM for general queries and another for technical or engineering questions.
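In practice, that division of labor can be as simple as a routing rule in front of the model calls. A minimal sketch of the idea; the keyword list and model names are illustrative stand-ins, not tools anyone quoted here endorses:

    # Route each query to the model designated for that kind of work,
    # rather than re-prompting one general model for everything.
    TECHNICAL_HINTS = ("stack trace", "regex", "sql", "kubernetes", "compile")

    def pick_model(query: str) -> str:
        q = query.lower()
        if any(hint in q for hint in TECHNICAL_HINTS):
            return "code-tuned-model"     # engineering questions
        return "general-purpose-model"    # everything else

    print(pick_model("Why does this SQL query deadlock?"))   # code-tuned-model
    print(pick_model("Summarize this meeting transcript."))  # general-purpose-model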
Many organizations face broader risks. McEwan points to the legal industry, which is already grappling with an aging workforce, a challenge that may be compounded by AI tools replacing opportunities for foundational skill-building. “There’s a really big concern in legal circles that junior lawyers will not develop the critical skills and analysis capability that’s needed to operate as a tenured, experienced lawyer. They’re actually worried about what happens in the middle… AI might be eroding the ability for junior lawyers to make their way through to effective senior lawyers,” he says.

The risk isn’t just lost training — it’s lost purpose. McEwan argues that productivity is more than just efficiency. “The other side of productivity is what we would call value creation — it’s delivering value. So just because you’re doing something faster and more efficiently, if it’s the wrong thing, it’s not actually adding value to the organization,” McEwan says.

In addition to rethinking how they define productivity, companies must also consider long-term talent development. “They need to understand where employees might not be gaining the type of experiences that will enable them to deliver higher value creation in the future,” McEwan says.

The weakening of ‘weak ties’

There is also a risk to our social abilities, says Julia Freeland Fisher, director of education research at the Clayton Christensen Institute, a think tank founded on the research of the late Harvard Business School Professor Clayton Christensen.

According to Fisher, this growing reliance on genAI tools is weakening our “weak ties” in the workplace — the distant colleagues we may not work with every day but who remain part of our work ecosystem. “It’s a hallmark of disruption: People are turning to those bots for the convenience and on-demand access that their colleagues can’t offer,” she says.

What’s the cost of fewer conversations at the proverbial water cooler or no longer pinging co-workers on Slack for quick questions? Fisher warns that the first consequence is a loss of access to information and to the social capital that flows through our networks. “The second is that it atrophies our ability to connect, collaborate, and work in teams, because we become increasingly accustomed to a sort of sycophantic bot relationship that’s frictionless and that meets all our needs,” she says.

This dynamic can erode an organization’s leadership pipeline and stifle innovation. Fisher explains that innovation often stems from cross-pollination across teams. “If you’re using AI in ways that are more efficient, but that actually increase the silos versus breaking them down across your teams, you’re going to see less of that cross-team innovation,” she says.

Reduced communication with weak ties outside the company — such as employees at a strategic partner — can have similar effects. Fisher points to research showing that one of the strongest predictors of who becomes an inventor in American society is exposure to others already in the innovation economy. In other words, when employees are cut off from those doing innovative work, they’re less likely to drive innovation themselves.

This isolation may only worsen as the anthropomorphization of genAI tools increases user engagement. “We’re walking this tightrope where the better the technology gets from a user-experience standpoint, the worse the social risk gets,” she says.

Yet it’s difficult for companies to change this dynamic, due to what Fisher describes as a classic innovator’s dilemma: short-term benefits overshadow the long-term costs, leading to underinvestment in low-margin but high-potential areas such as talent development.

Rather than defaulting to tools that promote isolation, Fisher suggests exploring a new crop of AI products designed to foster human connection. She cites Boardy, Series, and Climb Together as promising examples.

Still, the greatest responsibility may fall on individual employees. Fisher encourages a mindset of active growth and engagement. “Every time I’m leaning on AI, I should also be looking for thought partnership with my colleagues and extended networks and building new connections,” she says.

Kirill Perevozchikov, CEO of White Label PR, a public relations agency focused on gaming and entertainment, offers an anecdotal example. Although his firm is remote-first, he believes in-person communication is becoming even more critical in the wake of genAI. To build and maintain relationships, his team prioritizes meeting journalists in person at industry conferences and events. “It’s really important to see journalists face-to-face, so they will see that the person behind the emails that they’re getting is not an AI. It’s actually a real human being and you can have a beer with them,” he says.

The illusion of progress

LLMs may also create false expectations around pace and progress, according to Binny Gill, the founder and CEO of business process automation vendor Kognitos. While traditional engineering workflows are primarily linear, Gill says, genAI offers a rapid initial acceleration that can give the impression that a task is nearly complete, pushing employees to move ahead prematurely. “And while you do that, AI goes and messes up something that it had already done in the past. And then it’s like, ‘When will I ever go to 100?’ You take one step forward, two steps back. Then I’m struggling, struggling, struggling,” he says. To Gill, this frustration is the source of prompt fatigue.

To address this problem, Gill encourages his developers to “trust but verify.” Rather than assigning a sprawling, monolithic deliverable like a finished app or report, he advises breaking it down into smaller components that can be validated incrementally. “If you break it down but let the AI do it, then frustration does not happen as much,” he says, because each component can be vetted — and fixed if needed — without affecting other components.
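As a sketch of what that loop might look like in a workflow, the code below requests one component at a time and gates progress on a check before asking for the next. The generate() stub stands in for whatever model call a team actually uses; the component names and validators are illustrative:

    def generate(task: str) -> str:
        # Stand-in for an LLM call; returns a canned draft so the sketch runs.
        return f"# draft for: {task}"

    def build_incrementally(components: dict) -> dict:
        """components maps a component name to a validator: draft -> bool."""
        done = {}
        for name, is_valid in components.items():
            for _ in range(3):                  # bounded retries per piece
                draft = generate(name)
                if is_valid(draft):             # unit test or human review
                    done[name] = draft          # vetted before moving on
                    break
            else:
                raise RuntimeError(f"{name} never passed validation")
        return done

    parts = build_incrementally({
        "input parser": lambda d: "parser" in d,
        "report formatter": lambda d: "formatter" in d,
    })
    print(list(parts))  # each piece was checked, and fixed, in isolation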
Some best practices mirror how humans solve problems. When people are stuck on a difficult task, they often benefit from taking a break or sleeping on it. Gill recommends taking a similar approach with genAI: open a new chat window, keep the previous work for reference, and let the model reprocess the project with a fresh context. “AI now starts understanding the project all over again from scratch, [so you can] say, ‘Describe it to me.’ And while it’s describing it as if it were looking at it for the first time, it can notice certain issues,” he says.

In some cases, even that may not resolve the issue. Gill suggests switching between models, such as moving from Claude to Gemini Pro, because each model “thinks” differently. “What one model cannot solve, the other can solve. Even with humans, we do that. If one human cannot [solve a problem], go talk to somebody else, and maybe you’ll get an idea,” he says.
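Both tactics can be combined in one small loop: send only a summary of the work, with none of the accumulated chat history, and try a second model if the first stalls. In the sketch below, the two ask_* functions are stand-ins for real Claude and Gemini client calls, and their canned replies exist only so the example runs:

    def ask_model_a(prompt: str):
        # Stand-in for, e.g., a Claude call; None simulates a dead end.
        return None

    def ask_model_b(prompt: str):
        # Stand-in for, e.g., a Gemini call.
        return "Fresh read: the export step never receives the parsed data."

    def fresh_review(project_summary: str) -> str:
        # A new conversation sees only the summary, not the old chat
        # history, so the model reads the project "for the first time."
        prompt = ("Describe this project back to me and flag anything "
                  "inconsistent:\n" + project_summary)
        for ask in (ask_model_a, ask_model_b):  # what one model can't
            answer = ask(prompt)                # solve, another might
            if answer:
                return answer
        return "both models stalled; escalate to a colleague"

    print(fresh_review("Parser -> transformer -> exporter pipeline, v3 draft"))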
Gartner’s McEwan believes that these best practices should not only be taught in classroom settings but also reinforced through on-the-job training and coaching. “This is where the value of more experienced employees comes in: Being able to engage those employees to coach junior employees around some of the outputs, and check their work for accuracy — that’s going to involve more hands-on coaching than just sitting in a classroom,” McEwan says.

Leading through the genAI transition

Beyond individual adaptation, companies ultimately bear the greatest responsibility for addressing AI-related challenges. Forrester’s Joseph emphasizes that a deeper organizational mindset shift is essential. “A lot of organizations try to put productivity metrics around what is essentially psychological transformation. Unless it is actually dealt with at an organizational as well as individual level — as a psychological change as well as a structural change — it’s not going to work,” he says.

Using developers as an example, Joseph notes that many have shifted from traditional integrated development environments (IDEs) to AI-assisted development tools. “Are there enough internal team-level conversations and knowledge sharing about how that process is going, not just from a mechanics point of view, but also what toll it is taking on people’s engagement with their job?” he says, emphasizing the need to recognize and learn from those who successfully integrate AI tools into their workflows.

The AI model vendors also share responsibility. Gill warns that many are overpromising functionality that doesn’t hold up in real-world use. “There is a lot of smoke and mirrors in the market right now,” he says, referring to ‘AI-washing,’ where polished demos don’t translate to working solutions in production. He draws a parallel to the early automotive industry. “My advice to most AI companies is don’t go for self-driving on day one. Go with like a steering wheel with a little bit of cruise control… Let humans build the trust for AI and give them the ability to do little by little over time.”

White Label PR’s Perevozchikov offers an alternative approach to AI software procurement: his company is tool-agnostic, allowing employees to use the AI tools that work best for them, starting with a trial period. “They will decide if they want to stick with it or not. So this is interesting because the employee can try different tools, and then decide what works best for them. Then we just approve this as a company expense and they just continue using it,” he says.

This continuous testing creates a show-and-tell dynamic during team calls. “‘Hey, this is some cool stuff I found. Let’s talk about it. Let’s share the best practices,’” he says.

Taking a broader view, Perevozchikov sees prompt fatigue as an extension of Zoom fatigue and other strains tied to remote work. “Looking at this broadly as a problem of people being stuck in front of the computer is really helpful. Anything that can get them out in nature, we should encourage. Having new tools that are better is great, but just going outside and touching the grass is really important,” he says.
https://www.computerworld.com/article/4047909/burned-out-by-bots-prompt-fatigue-in-workplace.html