Is ChatGPT making us stupid?
Monday, January 27, 2025, 10:00 AM, from InfoWorld
Years ago Nicholas Carr argued that Google was making us stupid, that ease of access to information was shortening our attention spans and generally making it hard for us to do “deep reading.” Others worried that search engines were siphoning away readership from newspapers, collecting the cash that otherwise would fund journalism.
Today we’re seeing something similar in software development with large language models (LLMs) like ChatGPT. Developers turn to LLM-driven coding assistants for code completion, answers on how to do things, and more. Along the way, concerns are being raised that LLMs suck training data from sources such as Stack Overflow and then divert business away from them, even as developers cede critical thinking to LLMs. Are LLMs making us stupid?

Who trains the trainers?

Peter Nixey, founder of Intentional.io and a top 2% contributor to Stack Overflow, calls out an existential question plaguing LLMs: “What happens when we stop pooling our knowledge with each other and instead pour it straight into The Machine?” By “The Machine,” he’s referring to LLMs, and by “pooling our knowledge” he’s referring to forums like Stack Overflow where developers ask and answer technical questions.

ChatGPT and other LLMs have become “smart” by sucking in all that information from sites like Stack Overflow, but that source is quickly drying up. Stack Overflow had been in decline before the introduction of ChatGPT, GitHub Copilot, and other LLMs, but usage dropped off a cliff when developers started using AI tools in earnest, as Gergely Orosz highlights: “StackOverflow has not seen so few questions asked monthly since 2009!” We’re at the point, he continues, when it’s “safe to assume Stack Overflow needs a miracle for developers to start asking questions again in the same numbers as before.”

Don’t count on that miracle. Without a resurrection moment for Stack Overflow and other Q&A sites like Reddit, where will the LLMs get their training data? (Many of these sites now have partnerships with the LLM vendors and get paid to provide training data.) Nixey asks, “While GPT-4 was trained on all of the questions asked before 2021 [on Stack Overflow], what will GPT-6 train on?”

It is, of course, possible that LLMs can start to learn directly from their users rather than needing to train on data scraped from the web. Developer Jared Daines makes this argument, stressing that “the LLMs are being asked all sorts of questions and finding answers with human input,” which could end up being “the best way to train an LLM.” This feels like the best path forward. In my own experience at MongoDB, we work closely with the LLM providers to supply sample code and other training data, and I’m sure other vendors do the same. Still, that feels like an overly manual process for training the machines. Surely the LLM vendors are figuring out smarter ways to get smarter. But that doesn’t mean developers are doing the same.

We don’t need no education

In fact, one big risk right now is how dependent developers are becoming on LLMs to do their thinking for them. I’ve argued that LLMs help senior developers more than junior developers, precisely because more experienced developers know when an LLM-driven coding assistant is getting things wrong. They use the LLM to speed up development without abdicating responsibility for that development. Junior developers can be more prone to trusting LLM output too much, and may not know whether they’re being given good code or bad. Even for experienced engineers, however, there’s a risk in entrusting the LLM with too much.
For example, Mike Loukides of O’Reilly Media went through the company’s learning platform data and found that developers show “less interest in learning about programming languages,” perhaps because they are too “willing to let AI ‘learn’ the details of languages and libraries for them.” He continues, “If someone is using AI to avoid learning the hard concepts—like solving a problem by dividing it into smaller pieces (like quicksort)—they are shortchanging themselves.”

Short-term thinking can yield long-term problems. As noted above, more experienced developers can use LLMs more effectively precisely because of that experience. If a developer offloads learning in exchange for quick-fix code completion, at the long-term cost of understanding their own code, that’s a gift that will keep on taking.

It’s clear that LLMs are a big deal for developers and can make development significantly faster and better. But we need to be careful about short-term pillaging of training data at the expense of the long-term efficacy of the LLMs, just as we shouldn’t use LLMs to avoid learning the underlying principles and techniques that enable us to use them wisely.
https://www.infoworld.com/article/3809945/is-chatgpt-making-us-stupid.html