GenAI can make us dumber — even while boosting efficiency
Monday, February 17, 2025, 12:00 PM, from ComputerWorld
Generative AI (genAI) tools based on deep learning are quickly gaining adoption, but their use is raising concerns about how they affect human thought.
A new survey and analysis by Carnegie Mellon and Microsoft of 319 knowledge workers who use genAI tools such as ChatGPT or Copilot at least weekly showed that while the technology improves efficiency, it can also reduce critical-thinking engagement, encourage over-reliance, and diminish problem-solving skills over time.

“A key irony of automation is that by mechanizing routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise,” the study found.

Overall, workers’ confidence in genAI’s abilities correlates with less effort spent on critical thinking. The focus of critical thinking shifts from gathering information to verifying it, from problem-solving to integrating AI responses, and from executing tasks to overseeing them. The study suggests that genAI tools should be designed to better support critical thinking by addressing workers’ awareness, motivation, and ability barriers.

The research specifically examines the potential impact of genAI on critical thinking and whether “cognitive offloading” could be harmful. Cognitive offloading, the practice of using external devices or processes to reduce mental effort, is not new; it has been used for centuries. Writing things down, relying on others to help with remembering, problem-solving, or decision-making, and using a calculator instead of mental math are all forms of cognitive offloading.

The paper examined how genAI’s cognitive offloading in particular affects critical thinking among workers across various professions. The focus was on understanding when and how knowledge workers perceive critical thinking while using genAI tools, and whether the effort required for critical thinking changes with their use.

The researchers classified critical thinking into six categories: knowledge, comprehension, application, analysis, synthesis, and evaluation. Each of those six cognitive activities was scored with a one-item, five-point scale, as has been done in similar research.

The study found that knowledge workers engage in critical thinking primarily to ensure quality, refine AI outputs, and verify AI-generated content. However, time pressures, lack of awareness, and unfamiliarity with domains can hinder reflective thinking.

At college, signs of a decline in thinking abilities

David Raffo, a professor at the Maseeh College of Engineering and Computer Science at Portland State University, said he noticed over a six-year period that students’ writing skills were dropping. “Year after year, the writing got worse,” he said. “Then, during Covid, I noticed that papers started getting better. I thought, maybe staying at home had a positive effect. Maybe students were putting more energy and effort into writing their papers and getting better at their communication skills as a result.”

Raffo met with one student to discuss their A- grade on a paper. During the Zoom meeting, however, the student struggled to form grammatically correct sentences. Raffo began to question whether they had written the paper themselves, considering their communication skills didn’t match the quality of their work. “I wondered if they had used a paid service or generative AI tools. This experience, about three years ago, sparked my interest in the role of technology in academic work and has motivated my ongoing study of this topic,” said Raffo, who is also editor-in-chief of the peer-reviewed Journal of Software Evolution and Process.
The difference between using genAI and using calculators or internet search engines lies in which brain functions are engaged and how they affect daily life, said Raffo, who was not involved in the latest study. GenAI tools offload tasks that involve language and executive functions, and the “use it or lose it” principle applies: engaging our brains in writing, communication, planning, and decision-making improves these skills. “When we offload these tasks to generative AI and other tools, it deprives us of the opportunity to learn and grow or even to stay at the same level we had achieved,” Raffo said.

How AI rewires our brains

The use of technology in general rewires brains to think in new ways, some good, some not so good, according to Jack Gold, principal analyst at tech industry research firm J. Gold Associates. “It’s probably inevitable that AI will do the same thing as past rewiring from technology did,” he said. “I’m not sure we know yet just what that will be.”

As agentic AI becomes common, people may come to rely on it for problem-solving, but it is unclear how we will know it is doing things correctly, Gold said. People might accept its results without questioning them, potentially limiting their own skills development by allowing the technology to handle tasks.

Lev Tankelevitch, a senior researcher with Microsoft Research, said not all genAI use is bad; there is clear evidence in education that it can enhance critical thinking and learning outcomes. “For example, in Nigeria, an early study suggests that AI tutors could help students achieve two years of learning progress in just six weeks,” Tankelevitch said. “Another study showed that students working with tutors supported by AI were 4% more likely to master key topics.”

The key, he said, is that the work was teacher-led. Educators guided the prompts and provided context, showing how a collaboration between humans and AI can drive real learning outcomes, according to Tankelevitch.

The Carnegie Mellon/Microsoft study determined that the use of genAI tools shifts knowledge workers’ critical thinking in three main ways: from information gathering to verification, from problem-solving to integrating AI responses, and from task execution to task stewardship. While genAI automates tasks such as information gathering, it also introduces new cognitive tasks, such as assessing AI-generated content and ensuring accuracy. That shift changes the role of workers from doing the work of research to overseeing results, with the responsibility for quality still resting on the human.

Pablo Rivas, assistant professor of computer science at Baylor University, said that while letting a machine’s output go unchecked risks skipping the hard mental work that sharpens problem-solving skills, AI doesn’t have to undermine human intelligence. “It can be a boost if individuals stay curious and do reality checks. One simple practice is to verify the AI’s suggestions with outside sources or domain knowledge. Another is to reflect on the reasoning behind the AI’s output rather than assuming it’s correct,” he said. “With healthy skepticism and structured oversight, generative AI can increase productivity without eroding our ability to think on our own.”

A right way to use genAI?
To support critical thinking, organizations training their workforces should focus on information verification, response integration, and task stewardship, while maintaining foundational skills to avoid overreliance on AI. The study notes some limitations, such as potential biases in self-reporting, and calls for future research that considers cross-linguistic and cross-cultural perspectives, as well as long-term studies to track changes in AI use and critical thinking.

Research on genAI’s impact on cognition is key to designing tools that promote critical thinking, Tankelevitch said. Deep reasoning models are helping by making AI processes more transparent, allowing users to better review, question, and learn from their insights. “Across all of our research, there is a common thread: AI works best as a thought partner, complementing the work people do,” Tankelevitch said. “When AI challenges us, it doesn’t just boost productivity; it drives better decisions and stronger outcomes.”

The Carnegie Mellon-Microsoft study isn’t alone in its findings. Verbal reasoning and problem-solving skills in the US have been steadily dropping, according to a paper published in June 2023 by US researchers Elizabeth Dworak, William Revelle and David Condon. And while IQ scores had been increasing steadily since the beginning of the 20th century (as recently as 2012, IQ scores were rising about 0.3 points a year), a 2023 Northwestern University study showed a decline in three key intelligence testing categories.

All technology affects our abilities in various ways, according to Gold. For example, texting undermines the ability to write proper sentences, calculators reduce long division and multiplication skills, social media affects communication, and a focus on typing has led to neglecting cursive and signature skills, he noted. “So yes, AI will have effects on how we problem solve, just like Google did with our searches,” Gold said. “Before Google, we had to go to the library and actually read multiple source materials to come up with a concept, which required our brain to process ideas and form an opinion. Now it’s just whatever Google search shows. AI will be the same, only accelerated.”
https://www.computerworld.com/article/3824308/genai-can-make-us-dumber-even-while-boosting-efficienc...