Software development has a ‘996’ problem
Monday, November 24, 2025, 10:00 AM, from InfoWorld
There is a persistent, dangerous myth in software development that output equals outcome. It’s the idea that if we just throw more hours or more lines of code at the problem, we will inevitably win.
Gergely Orosz of The Pragmatic Engineer fame recently dismantled this myth with surgical precision. Orosz made a damning observation regarding the “996” work culture, the schedule of working 9 a.m. to 9 p.m., six days a week, popularized by Chinese tech giants: “I struggle to name a single 996 company that produces something worth paying attention to that is not a copy or rehash of a nicer product launched elsewhere.” The schedule and pace aren’t merely inhumane. They’re counterproductive. Brute force gives you volume but rarely gives you differentiation and (perhaps) never gives you innovation.

Now, before we start tsk-tsking China for such 996 practices, we’d do well to examine our own. Founders sell it as being “hardcore,” “all in,” or “grind culture,” but it’s the same idea: Crush people with hours and hope something brilliant falls out the other side.

And now we’re trying to instantiate this idea in code or, rather, in GPUs. Some assume that if we can just get large language models (LLMs) to work the equivalent of thousand-hour weeks, generating code at superhuman speeds, we’ll magically get better software. We won’t. We will just get more of what we already have: derivative, bloated, and increasingly unmanageable code.

The high cost of code churn

I’ve been sounding this alarm for a while. Recently I wrote about how the internet is being choked by low-value, high-volume content because we’ve made it frictionless to produce. The same is happening to our software.

We have data to back this up. As I noted when covering GitClear’s 2024 analysis of 153 million lines of code, “code churn,” or lines that are changed or thrown away within two weeks, is spiking. The research showed more copy-pasted code and significantly less refactoring. In other words, AI is helping us code faster (up to 55% faster, according to GitHub’s analysis), but it isn’t helping us build better. We are generating more code, understanding it less, and fixing it more often.
The real risk of AI isn’t that it writes code, but that it encourages us to write too much code. Bloated codebases are harder to secure, harder to reason about, and much harder for humans to own. Less code is better.

This is the 996 trap transferred to machines. The 996 mindset assumes that the constraint on innovation is the number of hours worked. The “AI-native” mindset assumes the constraint is the number of characters typed. Both are wrong. The constraint has always been and will always be clarity of thought.

Code is a liability, not an asset

Let’s get back to first principles. As any senior engineer knows, software development is not a typing contest. It is a decision-making process. The job is less about writing code and more about figuring out what code not to write. As Honeycomb founder and CTO Charity Majors puts it, being a senior software engineer “has far more to do with your ability to understand, maintain, explain, and manage a large body of software in production over time, as well as the ability to translate business needs into technical implementation.”

Every line of code you ship is a liability. Every line must be secured, debugged, maintained, and eventually refactored. When we use AI to brute-force the “construction” phase of software, we maximize this liability. We create vast surface areas of complexity that might solve the immediate Jira ticket but mortgage the future stability of the platform.

Orosz’s point about 996 companies producing copies is telling. Innovation requires the “slack” to think without the constant interruptions of meetings. Given a quiet moment, you might realize that the feature you were about to build is actually unnecessary. If your developers are spending their days reviewing an avalanche of AI-generated pull requests, they have no slack. They are not architects; they are janitors cleaning up after a robot that never sleeps.

None of this is to suggest that AI is bad for software development.
Quite the opposite is true. As Harvard professor (and longtime open source luminary) Karim Lakhani stressed, “AI won’t replace humans,” but we increasingly will see that “humans with AI will replace humans without AI.” AI is an effective tool, but only if we use it as a tool and not as a club to replicate the false promise of the 996 culture.

The human part of the stack

So, how do we avoid building a 996 culture on silicon? We need to stop treating AI as a “developer replacement” and start treating it as a tool to buy back the one thing 996 culture destroys: time. If AI can handle the drudgery (the unit tests, the boilerplate, the documentation updates), that should not be an excuse to jam more features into the sprint. It should be an opportunity to slow down and focus on the “human” parts of the stack, such as:

Framing the problem. “What are we actually trying to do?” sounds simple, but it is where most software projects fail. Choosing the right problem is a high-context, high-empathy task. An LLM can give you five ways to build a widget; it cannot tell you if the widget is the wrong solution for the customer’s workflow.

Editing ruthlessly. If AI makes writing code nearly free, the most valuable skill becomes deleting it. Humans must own the “no.” We need to reward developers not for the velocity of their commits, but for the simplicity of their designs. We need to celebrate the “negative code” commits: the ones that remove complexity rather than adding to it.

Owning the blast radius. When things break (and they will!), it’s your name on the incident report, not the LLM’s. Understanding the system deeply enough to debug it during an outage is a skill that degrades if you never write the code yourself. We need to ensure that “AI-assisted” doesn’t become “human-detached.”

I’ve stressed the importance of ensuring that junior developers don’t default to whatever an LLM gives them.
We need to ensure adequate training so that engineers of all skill levels can use AI effectively.

The rebellion against robot drivel is not about Luddism. It is about quality. Orosz’s critique of 996 is that it produces exhausted people and forgettable products. If we aren’t careful, our adoption of AI will produce the exact same thing: exhausted humans maintaining a mountain of forgettable, brittle code generated by machines.

We don’t need more code. We need better code. And better code comes from human minds that have the quiet, uncluttered space to invent it. Let the AI handle the brute force, freeing up people to innovate.
https://www.infoworld.com/article/4094801/software-development-has-a-996-problem.html