Traditional pair programming has a problem: it requires two people to be available at the same time, working on the same thing, at the same pace. In practice, this means it happens way less often than it should.
AI pair programming isn’t a perfect replacement — it lacks the creative friction and social accountability of working with another human. But it’s available 24/7, it doesn’t judge your weird variable names, and it’s surprisingly effective when you use it right.
I’ve been treating AI as my default pair partner for about six months now. Here’s the workflow I’ve developed.
The Wrong Way to AI Pair Program
Before I get into what works, let me describe what doesn’t:
The “do it for me” approach:
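It usually starts with one mega-prompt, something like this (hypothetical wording):

```
Write a complete, production-ready rate limiting system for my API.
Include Redis integration, per-user tiers, and tests.
```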
This isn’t pair programming — it’s delegation. You get back a blob of code you don’t fully understand, and you spend more time reviewing it than you would have spent writing it. I fell into this trap early on and ended up with code in production that I couldn’t debug because I didn’t really know how it worked.
The “autocomplete on steroids” approach: Writing code normally and just accepting whatever the AI suggests. This produces code that’s technically correct but inconsistent — a Frankenstein of your style and the AI’s suggestions. The resulting code is harder to maintain than code you wrote entirely yourself.
The Right Mental Model
The best mental model I’ve found is: treat the AI like a junior-to-mid developer who’s read every Stack Overflow answer ever written.
It knows a lot. It can write code quickly. But it doesn’t understand your specific system, your team’s conventions, or the business context behind your decisions. You’re the senior in the pair — you drive the architecture, the AI handles the boilerplate.
This means:
- You decide the approach and structure
- You break the problem into pieces
- The AI implements the pieces under your guidance
- You review everything before it goes in
My AI Pair Programming Workflow
Phase 1: Think Out Loud (5-10 minutes)
Before writing any code, I have a conversation with the AI about the problem:
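A typical opening, using the rate-limiter work as a running example (the prompt wording is hypothetical):

```
I need to add rate limiting to our public API. Constraints: per-user
limits with a higher tier for premium users, Redis is already in the
stack, and the latency budget is tight. Before we write any code,
what are the main approaches (fixed window, sliding window, token
bucket) and what are their tradeoffs here?
```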
This phase is about exploring the solution space. I’m using the AI as a sounding board — the way I’d use a human pair partner. Sometimes it surfaces tradeoffs I hadn’t considered. Sometimes it confirms what I was already thinking. Either way, it forces me to articulate my thinking, which is half the value of pairing.
Phase 2: Scaffold Together (10-15 minutes)
Once I’ve decided on an approach, I ask the AI to help with the structure:
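A typical scaffolding prompt, continuing the rate-limiter example (hypothetical wording):

```
We agreed on a sliding-window limiter backed by Redis sorted sets.
Write just the skeleton: class, method signatures, and docstrings.
No method bodies yet, raise NotImplementedError instead. I want to
agree on the shape before we fill anything in.
```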
This gives me a scaffold I can review and adjust before any logic is written. It’s like whiteboarding with a colleague — you agree on the shape before filling in the details.
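The scaffold that comes back looks something like this. It's a sketch: the class and method names are illustrative, not from a real codebase.

```python
# Hypothetical scaffold for a sliding-window rate limiter.
# Names and structure are illustrative, not from a real codebase.

class SlidingWindowRateLimiter:
    """Per-user rate limiting over a rolling time window."""

    def __init__(self, limit: int, window_seconds: int):
        self.limit = limit
        self.window_seconds = window_seconds

    def allow(self, user_id: str, now: float) -> bool:
        """Return True if this request fits within the user's limit."""
        raise NotImplementedError

    def remaining(self, user_id: str, now: float) -> int:
        """Return how many requests the user has left in the window."""
        raise NotImplementedError
```

At this stage the stubs are the point: I can argue about the interface before any logic exists.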
Phase 3: Implement Incrementally (30-60 minutes)
Now I work through the implementation piece by piece:
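Each step is a small, scoped request, along these lines (hypothetical wording):

```
Implement only the allow() method. Use a Redis sorted set per user:
remove entries older than the window, count what remains, and add
the new timestamp if the count is under the limit. Leave everything
else as a stub.
```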
After each piece, I review, modify if needed, and then move to the next:
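The follow-up after reviewing a piece is just as scoped (again, hypothetical wording):

```
Close, but the removal and the count need to happen in a single
Redis pipeline so the check is atomic under concurrent requests.
Apply that change, then we'll move on to remaining().
```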
This back-and-forth is where AI pair programming feels most like real pairing. I’m steering, the AI is typing, and we’re building the solution together.
Phase 4: Edge Cases and Testing (15-20 minutes)
AI is especially good at this phase because it’s pattern-matching against common edge cases it’s seen thousands of times. It consistently thinks of cases I’d miss — things like “what if the Redis key contains special characters” or “what if two requests arrive in the same millisecond.”
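To make those cases concrete, here's a sketch of the same-instant scenario against an in-memory stand-in for the Redis-backed limiter. The class is hypothetical and exists only to make the edge cases testable in isolation:

```python
# Edge-case check: two requests arriving at the same instant.
# In-memory stand-in for the Redis-backed limiter, so the
# behavior can be exercised in isolation.
from collections import defaultdict

class InMemorySlidingWindow:
    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self._hits = defaultdict(list)  # user_id -> request timestamps

    def allow(self, user_id: str, now: float) -> bool:
        # Drop timestamps that have aged out of the window.
        cutoff = now - self.window
        hits = [t for t in self._hits[user_id] if t > cutoff]
        self._hits[user_id] = hits
        if len(hits) >= self.limit:
            return False
        hits.append(now)
        return True

limiter = InMemorySlidingWindow(limit=2, window_seconds=60)
assert limiter.allow("u1", 0.0)
assert limiter.allow("u1", 0.0)      # same instant: still counted
assert not limiter.allow("u1", 0.0)  # third hit in window is rejected
assert limiter.allow("u1", 61.0)     # window has rolled over
```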
Phase 5: Review and Refine (10-15 minutes)
Finally, I paste the complete implementation back and ask for a review:
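The review prompt is deliberately adversarial (hypothetical wording):

```
Here is the complete implementation. Review it as if it were a PR
from a colleague: correctness, naming, error handling, and anything
that will bite us in production. Be critical, not polite.
```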
This is like asking your pair partner to switch roles: they've been driving while you navigated, and now they take the navigator's seat for a review pass. The fresh perspective (even from an AI) often catches issues you introduced during the iterative development.
Prompting Patterns That Work for Pairing
The “Don’t Jump Ahead” Pattern
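The pattern is an explicit ground rule stated at the start of the session, something like this (hypothetical wording):

```
We're building this incrementally. Only write the piece I ask for.
Don't implement anything beyond it, even if the next step seems
obvious.
```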
Without this, AI tools tend to generate the entire solution at once, which defeats the purpose of incremental pairing.
The “Challenge My Assumptions” Pattern
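A hypothetical version of the prompt:

```
I'm planning to store rate-limit state in Redis sorted sets, one
per user. Before you agree with me: what's wrong with this
approach? What would you do instead, and why?
```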
This gets you the creative friction that makes human pairing valuable. The AI won’t always be right, but it’ll force you to defend your decisions — which either strengthens them or reveals weaknesses.
The “Teach Me” Pattern
When working in an unfamiliar area:
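For example, a prompt along these lines (illustrative wording):

```
Explain how TLS certificate chains are validated, assuming a
developer who has used HTTPS for years but never looked under the
hood. Then ask me two questions to check my understanding.
```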
This is arguably AI’s biggest advantage over human pairing: it’s infinitely patient when teaching. A human pair partner might get frustrated explaining certificate chains for the third time. The AI never does.
When to Ditch the AI
AI pair programming has real limitations. I’ve learned to recognize when to stop and think for myself:
Complex business logic. The AI doesn’t know why your rate limit for premium users is 10x higher, or why certain endpoints are exempt. You have to encode that context with every prompt, and at some point the prompting overhead exceeds the value.
System design decisions. AI can list tradeoffs, but it can’t feel the weight of living with a bad architecture for two years. Your experience matters more here.
When you’re learning something new. Counterintuitive, but sometimes the struggle is the point. If I’m learning a new framework, having the AI write all the code means I learn nothing. I’ll use it to explain concepts but write the code myself.
When the AI is confidently wrong. This happens. The AI will write code that looks correct, passes your first review, and has a subtle bug. If you notice it starting to generate plausible-but-wrong code, step back and think independently.
The Productivity Impact
I tracked my output (loosely, not scientifically) over three months of AI pair programming vs. three months prior:
- Lines of code per day: Up maybe 30-40%. Not a huge jump, because I was already productive.
- Time to working prototype: Down significantly. What used to take a day often takes a few hours.
- Bug rate: About the same. AI-generated code introduces different bugs than I would write, but not more of them.
- Code I understand: This went down initially, until I fixed my workflow. Now it’s back to normal because I drive the architecture.
- Enjoyment: Honestly higher. The boring parts of coding (boilerplate, standard patterns) go faster, so I spend more time on the interesting parts.
The biggest productivity gain isn’t speed — it’s reduced context switching. When you’re stuck on a problem with a human pair partner, you have to explain the full context. With AI, the context is already in the conversation. No warmup, no “wait, let me read this function first.”
Tools for AI Pair Programming
The tool matters less than the workflow, but here’s what works for different styles:
- Cursor: Best for in-editor pairing where you want inline suggestions and quick edits. The Composer feature is great for multi-file changes.
- Claude Code: Best for terminal-based pairing with deep reasoning. My preferred tool for the “think out loud” and “challenge assumptions” phases.
- Copilot Chat: Decent for quick questions without leaving the editor. Not as good for extended pairing sessions.
I typically use Claude Code for phases 1-2 (thinking and scaffolding) and Cursor for phases 3-5 (implementation and review). Your mileage may vary.
The best pair programming partner is whoever makes you think more clearly. Sometimes that’s a human. Sometimes, surprisingly, it’s an AI.