Last Wednesday in our 1-on-1, my most senior engineer — a veteran on the team and consistently our highest performer — said something that’s been rattling around in my head ever since:
“I used to spend my mornings solving interesting problems. Now I spend them reviewing diffs that an AI agent wrote. The code is usually fine. But I feel like a QA person, not an engineer.”
He wasn’t angry. He wasn’t being dramatic. He said it the way you’d describe the weather. Just a fact about how his job had changed.
That same week, a Hacker News thread hit the front page: “Some Uncomfortable Truths About AI Coding Agents.” The top comment, with hundreds of upvotes, made the same point: the agent takes many of the fun parts of a software engineer’s job and leaves the human with more of the unenjoyable ones.
The thread had 400+ comments. It clearly hit a nerve.
The Velocity Paradox
Here’s what makes this hard to talk about in management circles: by every measurable metric, we’re doing great.
Sprint velocity is up roughly 35-40% since we adopted agentic workflows six months ago. Time-to-merge has dropped. Bug escape rate is… about the same, actually, which is its own interesting data point. On paper, my team is crushing it.
But our quarterly engagement survey told a different story. “I find my work intellectually stimulating” and “I’m learning and growing in my role” both dropped significantly. Multiple engineers mentioned in open comments that they feel like “button pushers.”
I’ve never seen velocity go up and engagement go down simultaneously. That’s a new management problem.
What Actually Changed Day-to-Day
Let me be specific about what shifted, because the abstract version (“AI does more work”) hides the texture.
Before agentic workflows: A senior engineer would pick up a ticket, spend 20-45 minutes understanding the problem space, architect a solution in their head, write the code over a few hours, hit a few bugs, debug them (often learning something in the process), write tests, and submit a PR. The whole arc was theirs.
After agentic workflows: The same engineer picks up a ticket, writes a clear prompt describing the problem, watches the agent generate a solution in 3-8 minutes, reviews the diff, catches maybe one or two issues, tweaks the prompt, reviews again, and submits. Total time: 30-45 minutes instead of a few hours.
The productivity gain is real. What’s also real is that the experience went from “I built this” to “I supervised this.” The debugging — which most engineers secretly enjoy because it’s a puzzle — gets handled by the agent. The architectural thinking gets compressed into a prompt. The satisfaction of making something work gets replaced with the mild approval of confirming someone else’s work is correct.
As my engineer put it: “I used to write code. Now I write performance reviews for an AI.”
The Management Trap
The obvious management response is “you should be grateful — you’re more productive and can go home earlier.” I’ve seen this take from executives who’ve never written code. It misses the point entirely.
Most software engineers didn’t choose this career for the productivity metrics. They chose it because building things is intellectually satisfying. The act of writing code — thinking through edge cases, refactoring until it’s clean, that moment when the tests go green — that’s intrinsically motivating.
When you remove the intrinsically motivating parts and replace them with “review this diff,” you haven’t made the job better. You’ve made it different. And the people who were best at the old job may not be the ones who thrive in the new one.
I’ve talked to three other tech leads about this. All three report the same pattern: senior engineers are the most vocal skeptics not because they don’t understand AI, but because they have the most to lose. They’re the ones whose hard-won expertise in debugging, system design, and code quality is being partially automated. Junior engineers, paradoxically, are often more enthusiastic — they never had to earn those skills the hard way, so they don’t feel the loss.
Three Things I’m Actually Trying
I don’t have this figured out. Anyone who says they do is lying. But here’s what I’m experimenting with:
1. Reserving “Agent-Free Zones”
One day a week — we picked Thursdays — nobody uses AI coding agents. Just you, your editor, and the problem. Engineers write the code themselves.
The reaction was surprising. Two engineers love it. One said it’s “like going to the gym for my brain.” Another said it was the first time in months he felt like he was actually engineering instead of reviewing. One engineer thinks it’s stupid and a waste of time. He’s probably right from a pure efficiency standpoint. But efficiency wasn’t the point.
I’m tracking whether Thursday output quality differs from agent-assisted days. Early signal: fewer bugs, but significantly less throughput. The code tends to be more thoughtful — better variable names, more comments, tighter abstractions. Whether that’s worth the productivity hit depends on what you’re optimizing for.
2. Shifting Senior Engineers to Architecture and Design
If the execution layer is increasingly handled by agents, senior engineers should be spending more time on the parts agents are still genuinely bad at: system design, cross-service architecture, performance modeling, and the agentic engineering meta-skill of knowing how to decompose problems for AI agents.
I’ve started giving senior engineers more explicit architecture ownership — not just “you own this service” but “you own the design document, the interface contracts, and the review criteria.” The agent writes the implementation, but the senior engineer defines the box it implements within.
This has been partially successful. Two engineers responded well and say they feel more like “architects” than “QA.” One engineer told me architecture docs are “even more boring than reviewing diffs.” Can’t win them all.
3. Making the Review Process More Intellectually Engaging
Reviewing AI-generated code is boring when it’s “read this diff and confirm it works.” It’s less boring when you’re explicitly looking for things the agent missed: security implications, performance regressions, maintenance burden, whether the code matches your team’s standards.
I rebuilt our review checklist to include higher-order concerns: Does this change make the system harder to understand in six months? Does it introduce coupling we’ll regret? Is the error handling robust, or just “happy path plus a catch-all”? These questions require the kind of judgment that makes reviewing feel less like rubber-stamping and more like engineering.
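To make that concrete, here’s a sketch of what that kind of checklist can look like. This is an illustrative template, not our actual internal document — the specific wording and groupings are mine:

```
Review checklist for agent-generated diffs (illustrative)

Correctness (the baseline):
- [ ] Does the diff actually solve the ticket, not just compile and pass tests?
- [ ] Are the tests meaningful, or do they just mirror the implementation?

Higher-order concerns (the judgment calls):
- [ ] Will this change make the system harder to understand in six months?
- [ ] Does it introduce coupling between modules we'll regret later?
- [ ] Is the error handling robust, or just happy path plus a catch-all?
- [ ] Does it match this team's conventions, or just "generic good code"?
- [ ] Is there a security or performance implication the agent had no context for?
```

The point of the second section is that every box requires judgment the agent doesn’t have; a reviewer can’t tick them by skimming the diff.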
What Worries Me
The pattern I’m seeing isn’t unique to my team. The HN thread, the Reddit discussions, the whisper network at engineering meetups — there’s a growing population of experienced engineers who feel increasingly disconnected from the craft they spent years mastering.
Some will adapt. They’ll find the new work — prompt engineering, architecture, agent orchestration — intellectually satisfying. Some will leave for roles where they can still write code by hand. Some will stay but check out mentally, collecting a paycheck while their engagement craters.
The teams that handle this transition well will be the ones that acknowledge the loss explicitly, rather than pretending that “more productivity” is automatically “better for everyone.” Velocity metrics don’t capture whether your best engineers are quietly updating their resumes.
The Honest Answer
My engineer asked me directly: “Is it going to be like this from now on?”
I told him the truth: probably, yes. The agents will get better, not worse. The percentage of code written by AI will go up, not down. The job of “software engineer” in 2027 will look even less like the job he signed up for in 2018.
But I also told him that the skills that made him great — deep system understanding, the ability to see around corners architecturally, the instinct for when something “feels wrong” in a design — those skills are more valuable now, not less. They’re just deployed differently.
Whether that’s enough to keep him engaged, I honestly don’t know. Ask me in six months.
What I do know is that pretending this isn’t happening — that AI is pure upside with no human cost — is the fastest way to lose the people who make your team actually work. The uncomfortable truth isn’t that AI coding agents are taking engineering jobs. It’s that they’re hollowing out the parts of engineering that made people want the job in the first place.
And that’s a management problem that no amount of velocity improvement can solve.