why the 'hand coding' backlash is really about agency, not nostalgia
There is a post making the rounds on Hacker News today that caught my attention for exactly the right reasons. The title is deliberately provocative: "I'm going back to writing code by hand." It hit 900+ points in a few hours, and the comments section is exactly what you would expect: people saying "this is Luddite nonsense," people saying "finally someone said it," people arguing about whether the author is a bad engineer or the only honest one in the room. Both sides are missing the point, I think. The piece is not really about whether to use AI. It is about something more subtle: the feeling that you can no longer tell what your tools are doing to your work. That is not nostalgia. That is a real signal.

The article makes a short, honest argument. The author describes using AI heavily for code generation, getting comfortable with it, and then noticing an uncomfortable pattern: they were spending less time understanding the code they owned. Diffs were getting larger and less familiar. Edge cases were getting missed. The review process felt shallower because the output looked polished even when it was wrong. The solution the author chose was to go back to writing code by hand for a while. Not forever. Not as a moral stance. As a recalibration. That is not Luddism. That is noticing that your feedback loop has degraded and taking steps to fix it.

The pushback on HN is mostly about scale: "you cannot reject productivity gains just because they feel uncomfortable." And sure, at scale, that argument has teeth. But the individual experience is real, and I think it points to something the industry is not talking about enough.

I do not think the backlash is against AI, or even against code generation. I think it is against indistinguishability: the state where you can no longer reliably tell the difference between output that is correct and output that merely looks correct.

This problem has empirical backing now. A paper on arXiv that came out alongside the hand-coding post, "LLMs Corrupt Your Documents When You Delegate," studied what happens when you delegate document processing to an LLM. The finding is straightforward and unsettling: delegated workflows systematically introduce errors. Not random noise. Systematic, hard-to-detect corruption of the content being processed.

The paper is about documents, but the pattern applies directly to code. When you delegate code generation to an AI and then review the output, you are operating in a mode where:

- The output is polished enough to pass a quick scan.
- The errors are not random typos. They are logical gaps, missing edge cases, and incorrect assumptions that are internally consistent (see the sketch below).
- The review process is asymmetrical: the model produced the output in seconds, but catching its mistakes can take as long as writing the code yourself would have.
- The cost of a missed error is not in the generation step. It is in the production incident three weeks later.
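To make that failure mode concrete, here is a hypothetical illustration (mine, not the author's or the paper's) of the kind of generated function that passes a quick scan: cleanly written, internally consistent, and wrong on an edge case.

```python
def chunk(items: list, size: int) -> list:
    """Split items into consecutive chunks of at most `size` elements."""
    # Reads cleanly and is internally consistent, but the integer
    # division silently drops the trailing partial chunk:
    #   chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4]]  (the 5 is gone)
    return [items[i * size:(i + 1) * size] for i in range(len(items) // size)]


def chunk_fixed(items: list, size: int) -> list:
    """Same contract, with the edge case handled."""
    # Stepping by `size` keeps the final partial chunk:
    #   chunk_fixed([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
    return [items[i:i + size] for i in range(0, len(items), size)]
```

Nothing in the first version looks wrong in a thirty-second review. The bug only surfaces when the input length is not a multiple of `size`, which is exactly the production-incident-three-weeks-later shape described above.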
The hand-coding author is not the first person to notice this. But the combination of the personal account and the empirical paper makes the pattern harder to dismiss as "just someone being resistant to change."

There is another data point that fits the pattern. The New York Times reported this week that Meta's embrace of AI is making employees miserable (455 points on HN, heavy discussion). The story describes engineers at Meta burning out because AI adoption was mandated from the top, not enabled from within. The details matter: engineers were being evaluated on AI tool usage metrics, and teams were pressured to show adoption numbers. What started as a productivity initiative turned into a compliance exercise. The result was not better engineering. It was fatigue, resentment, and a growing sense that the work had become about producing outputs rather than building systems.

I have seen versions of this dynamic in smaller teams too. When AI adoption becomes a metric that management tracks, the incentives go wrong fast. People stop asking "is this tool making our system better?" and start asking "how do I make my AI usage chart go up?" The Meta story is a case study in what happens when you optimize for the wrong signal. Engineers notice. They get demoralized. The AI tooling becomes overhead with extra steps.

I keep coming back to the concept of agency because I think it is the frame that makes sense of all three signals: the hand-coding post, the LLM corruption paper, and the Meta burnout story. Agency, in this context, means:

- You understand what the code you ship actually does.
- You can trace a production issue back to its cause without guessing.
- You know when to trust the tool and when to override it.
- You are not surprised by what your system does in production.

AI-assisted workflows can preserve agency, but it takes deliberate work. The default behavior (generate, skim, merge, move on) erodes agency incrementally. Each diff you do not fully understand is a small loss. Each review where you trust the output because it looks clean is a bet that may or may not pay off.

The teams that maintain agency share a few patterns:

- They do not let the agent touch unfamiliar code. If nobody on the team deeply understands a module, the agent is not going to help; it will produce plausible changes that compound the confusion.
- They review agent output with more scrutiny, not less. The more confident the output looks, the more carefully they read it. They have learned that polished-and-wrong is more dangerous than obviously wrong.
- They separate generation from review, and enforce a gap. They write with the agent, then review without it. The review session does not have the generation context open. They look at the diff as if someone else wrote it.
- They maintain a manually written subset of the codebase: infrastructure configs, authentication logic, state management, the parts where being wrong is expensive. These are written by hand and reviewed by multiple people. Agents can suggest, but they do not author.
- They track rework, not output. Rework ratio is the signal that tells you whether agency is eroding. If code you generate needs more fixes than code you write, you have a problem that generation speed does not solve. (One way to approximate it is sketched after this list.)
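That last item is measurable. Below is a minimal sketch of one way to approximate a rework ratio from git history. The fix-keyword list, the 21-day window, and the `rework_ratio` name are my assumptions, not anything from the post; treat it as a starting point for a trend line, not a metric to manage people by.

```python
"""Rough rework-ratio heuristic over a git history (a sketch).

A commit counts as rework if its subject looks like a fix AND it
touches a file already modified within the preceding window.
"""
import subprocess

REWORK_WINDOW_DAYS = 21  # assumption: tune to your release cadence
FIX_KEYWORDS = ("fix", "revert", "hotfix", "regression", "bug")


def rework_ratio(repo: str = ".", since: str = "90 days ago") -> float:
    # %x1e emits a record separator so commits can be split safely
    # even when subject lines contain newline-adjacent oddities.
    log = subprocess.run(
        ["git", "-C", repo, "log", f"--since={since}",
         "--name-only", "--pretty=format:%x1e%ct|%s"],
        capture_output=True, text=True, check=True,
    ).stdout

    commits = []  # (unix timestamp, lowercased subject, touched files)
    for record in log.split("\x1e"):
        lines = [l for l in record.splitlines() if l.strip()]
        if not lines:
            continue
        ts, _, subject = lines[0].partition("|")
        commits.append((int(ts), subject.lower(), lines[1:]))
    commits.sort(key=lambda c: c[0])  # oldest first

    last_touched: dict = {}  # file path -> last commit timestamp
    total = rework = 0
    for ts, subject, files in commits:
        total += 1
        touches_recent = any(
            f in last_touched
            and ts - last_touched[f] < REWORK_WINDOW_DAYS * 86400
            for f in files
        )
        if touches_recent and any(k in subject for k in FIX_KEYWORDS):
            rework += 1
        for f in files:
            last_touched[f] = ts
    return rework / total if total else 0.0


if __name__ == "__main__":
    print(f"rework ratio: {rework_ratio():.1%}")
```

The absolute number matters less than the comparison: if the ratio runs consistently higher for heavily generated changes than for hand-written ones, that is the erosion signal the author is describing.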
the second-order effect nobody is modeling

Here is the part that worries me the most. When you stop writing code by hand and start reviewing generated code, you are not just changing how code gets written. You are changing how you learn.

Writing code by hand is a learning mechanism. Every time you type out a loop, a conditional, a state transition, a retry strategy, you are reinforcing your mental model of the system. You are building the intuition that tells you "this looks right" or "something is off here." When you delegate the writing, you lose that reinforcement loop.

The concern is not that you stop being able to write code. It is that your intuition for what good code looks like atrophies. You stop seeing the patterns that make a function robust. You stop noticing when an abstraction is wrong, because you did not write it and feel its weight. This is the pattern Sean Goedecke wrote about in "Software engineering may no longer be a lifetime career": AI tooling may de-skill engineers the same way mechanization de-skilled construction workers over time. Not because the tools are bad, but because the tools change what the body learns.

The hand-coding backlash is, I think, a response to this. Not a rejection of AI. A refusal to give up the learning loop.

The way I read the current moment is this: a significant number of experienced engineers are starting to feel something they cannot quite articulate. They are using AI tools, getting real productivity gains, and also feeling like something is slipping. The code they produce is not worse, but their relationship to it is different. More distant. Less intimate.

The "going back to hand coding" post is one person articulating that feeling. The LLM corruption paper is the empirical version of the same intuition. The Meta burnout story is what happens when organizations ignore the feeling and push harder. None of these signals say "AI is bad." They say "the mode of work has changed, and we have not yet figured out how to preserve agency in the new mode." That is a solvable problem. But it requires accepting that the problem exists, rather than dismissing the people who feel it as resistant to progress.

If you are an engineer reading this and wondering whether you should "go back to hand coding," I do not think the answer is binary. The answer is probably:

- Use AI for what it is good at: boilerplate, repetitive patterns, well-scoped generation, documentation drafts, code you already know how to write.
- Protect what makes you good: manual writing in unfamiliar domains, deep review of generated changes, time to understand the system without an assistant in your ear.
- Measure the right thing: not how many diffs you generate, but how many of them survive the next quarter without needing rework.

The hand-coding backlash is not a rejection of the future. It is a signal that the future needs better scaffolding. And the engineers who figure out how to build that scaffolding, for themselves and their teams, are going to be the ones who stay effective in both modes. Because the goal was never to write the most code in the shortest time. The goal was to build systems that work, that last, and that you understand well enough to fix at 3 AM. Everything else is just tooling.

links

- I'm going back to writing code by hand (900+ points, HN)
- LLMs Corrupt Your Documents When You Delegate (empirical study, arXiv)
- Meta's embrace of AI is making employees miserable (NYT)
- Software engineering may no longer be a lifetime career (Sean Goedecke)
