Six months ago I watched one of the strongest engineers I’ve worked with spend 40 minutes manually writing a data transformation that Copilot would have drafted in 90 seconds. When I mentioned it afterward, he said: “I know what it can do. I just don’t trust it.”
He’s not alone. In my experience, the engineers most resistant to AI tooling aren’t the junior folks worried about their jobs; they’re the senior engineers who’ve seen enough to be suspicious.
They’re not wrong to be suspicious. They’re also not entirely right.
Table of Contents
- Why They’re Skeptical
- The Hype Cycle Argument
- The Debugging Argument
- The Maintenance Burden Argument
- Where the Skeptics Are Wrong
- What Actually Works for Skeptics
- The Uncomfortable Bottom Line
Why They’re Skeptical
Senior engineers aren’t contrarian for sport. Their skepticism is earned.
They’ve watched blockchain, microservices, serverless, NoSQL, and a dozen other technologies get declared paradigm shifts — and then watched the industry spend years cleaning up the messes those shifts created. They’ve learned to wait out hype cycles. They’ve learned that the cost of adopting something too early often outweighs the benefit.
And they’ve spent enough years in production systems to know that the hard part of software isn’t writing the code. It’s maintaining it, debugging it at 2 AM, understanding what it does when the person who wrote it has left the company.
From that vantage point, AI tools look like they’re solving the easy parts of the job.
The Hype Cycle Argument
“We’ve seen this before” is a legitimate pattern-match. The senior engineers raising this objection are usually right that the hype outpaces the reality.
Where they’re partially right: the current AI tooling has real limitations that the marketing materials don’t emphasize. Context windows have limits. Models hallucinate. The code looks confident even when it’s wrong. Over-reliance on AI suggestions in unfamiliar domains creates code that nobody fully understands — including the person who wrote it.
Where this argument breaks down: AI coding tools aren’t in the same category as blockchain. The productivity gains are measurable and reproducible across enough different contexts that “this is just hype” doesn’t hold up. The question isn’t whether these tools provide value. The question is where and how much.
Skepticism without engagement becomes the same intellectual trap as credulity without scrutiny.
The Debugging Argument
This one I take seriously. Real debugging — the kind where something is wrong in production and you need to trace it through six layers of abstraction — requires holding a mental model of the entire system. You need to understand what should happen, what’s actually happening, and where those diverge.
AI tools are not good at this. They’re good at pattern-matching on code snippets. They’re bad at “here’s the call stack from this weird race condition in the message queue, what’s happening?”
Senior engineers are right that debugging is underrated as a skill, and right that over-relying on AI for code generation creates code that’s harder to debug. If you don’t understand the code you wrote, you’ll have a very bad time when it breaks.
I covered the debugging limitation in more depth in AI Debugging Production — the short version is that AI is useful for the 80% of bugs that are obvious, not for the 20% that matter most.
The Maintenance Burden Argument
This is the strongest skeptic argument and the one that gets dismissed too quickly.
AI-generated code tends to be verbose. It adds error handling you didn’t ask for, abstractions that seem reasonable in isolation but don’t fit your codebase conventions, and sometimes just more code than the problem requires. More code = more maintenance surface.
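The verbosity cost is easy to see in a contrived example. Both functions below are hypothetical, not from any real codebase: the first is the defensively wrapped kind of draft an assistant often produces, the second is what most codebases actually want.

```python
# Hypothetical illustration: the same lookup written two ways.

# An AI draft often arrives wrapped in defensive scaffolding
# nobody asked for:
def get_user_email_verbose(users, user_id):
    """Return the email for user_id, or None."""
    try:
        if users is None:
            return None
        for user in users:
            if user is not None and user.get("id") == user_id:
                email = user.get("email")
                if email is not None:
                    return email
        return None
    except (TypeError, AttributeError):
        return None

# The concise version: same behavior for well-formed input,
# with a fraction of the maintenance surface.
def get_user_email(users, user_id):
    return next((u["email"] for u in users if u["id"] == user_id), None)
```

Neither version is wrong in isolation. The problem is that twenty of the first kind, scattered through a codebase, add review and maintenance load without adding safety anywhere it matters.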
I’ve watched teams where junior engineers are using AI for everything and the codebase is slowly becoming inconsistent — different patterns side by side, no clear reason for the divergence. The engineers who have to maintain that code five years from now are going to curse every AI-generated shortcut.
This is a real cost. The AI coding standards post I wrote tries to address it directly — you need standards for AI-generated code, not just trust that it’ll be fine.
Where the Skeptics Are Wrong
Here’s the uncomfortable part: most senior engineers who are skeptical of AI tools haven’t actually used them seriously for more than a few weeks.
They tried it, it generated something bad, they concluded it wasn’t useful, and they moved on. That’s a reasonable response to most new tools. But AI coding assistants have a learning curve — not in the tool itself, but in how you use it. Prompt quality matters. Knowing what to ask for and what to verify matters. Senior engineers who put in the time to learn the tool tend to revise their opinion.
The second failure mode: applying the tool to the wrong problems. Senior engineers try AI on the complex, nuanced work that requires deep system understanding — and it fails, as expected. They don’t try it on the 30% of their work that’s genuinely mechanical: parsing configs, writing boilerplate, drafting docs, translating between formats. That’s where the ROI is clearest.
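To make “genuinely mechanical” concrete, here’s a hypothetical instance of the format-translation work mentioned above: turning a flat KEY=VALUE config into JSON. The function and the input format are made up for illustration; the point is that this kind of work is tedious to type and trivial to verify, which is exactly the profile where an AI draft pays off.

```python
import json

def env_to_json(text):
    """Translate KEY=VALUE lines (skipping blanks and # comments) into JSON."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return json.dumps(config, indent=2)

print(env_to_json("# db settings\nDB_HOST=localhost\nDB_PORT=5432"))
```

A senior engineer can read this in ten seconds and confirm it does what it should. That review cost is the whole trade: cheap to check, boring to write.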
Refusing to engage at all isn’t skepticism. It’s opting out of a productivity shift that is happening whether you participate or not. In two years, the engineers who’ve built fluency with these tools will have a measurable advantage on certain classes of work. Holding out on principle doesn’t make the advantage disappear — it just means it accrues to someone else.
What Actually Works for Skeptics
The frame that seems to resonate with skeptical senior engineers: AI as a first draft, not a finished product.
Don’t ask it to write your code. Ask it to write a starting point that you’ll then review, rewrite, and own. If you’re uncomfortable with that, ask it to explain an unfamiliar library before you write the code yourself. Or have it generate the test cases for an edge case you want to verify.
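As a sketch of that test-generation entry point: `slugify` below is a hypothetical helper, and the asserts are the sort of edge cases you’d ask the tool to enumerate and then audit by eye rather than trust blindly.

```python
import re

def slugify(text):
    """Lowercase, collapse runs of non-alphanumerics to single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# AI-drafted edge cases: cheap to generate, easy to audit.
assert slugify("Hello, World!") == "hello-world"
assert slugify("   ") == ""                       # whitespace only
assert slugify("already-slugged") == "already-slugged"
assert slugify("Ünïcode") == "n-code"             # non-ASCII stripped, not transliterated
```

The last case is the interesting one: a generated test can surface behavior you hadn’t decided on yet (should accented characters be transliterated?), which is a question for you, not the tool.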
These are entry points that don’t require trusting the output — they require treating the output the way you’d treat a suggestion from a smart but junior colleague. Potentially useful, definitely worth checking.
The engineers who’ve found the most value tend to be the ones who stayed skeptical about individual outputs while remaining curious about the overall tool.
The Uncomfortable Bottom Line
Senior engineers are right that AI tools have real limitations. They’re right that the hype outpaces the reality. They’re right that AI-generated code can create maintenance problems.
Where they go wrong is treating “there are real limitations” as a reason not to develop fluency with the tool. Every powerful tool has limitations. The engineers who understand those limitations deeply are the ones who use the tool most effectively — not the ones who use it uncritically, but also not the ones who opt out entirely.
The strongest skeptics I’ve seen come around eventually. Usually after a junior engineer on their team ships something faster than they expected and the quality holds up. “How’d you do that so quickly?” is the beginning of a useful conversation.