Last Tuesday, five minutes before a 1:1, I realized I had no idea what we talked about two weeks ago. The notes were buried somewhere, the action items were vague, and I was about to walk into a conversation with nothing useful to say.
I used to just wing it. Then I started using AI to prep. It changed the quality of those meetings — but it also surfaced a real question about where the line is.
Table of Contents
- What I Actually Do
- Step 1: Summarize Previous Action Items
- Step 2: Review Recent PRs and Commits
- Step 3: Generate Discussion Topics
- Where I Draw the Line
- The Creepy Version (Don’t Do This)
- What This Actually Changes
What I Actually Do
My prep workflow takes about 10 minutes now, down from 25 when I did it manually (or 0, when I skipped it entirely, which happened more than I’d like to admit).
The workflow:
- Dump last week’s notes into a prompt, get a clean summary of outstanding items
- Pull recent PR activity and ask AI to surface anything worth discussing
- Generate a list of potential discussion topics based on what I know about where that person is at
None of this is magic. But it’s the difference between showing up prepared and showing up hoping the other person carries the conversation.
Step 1: Summarize Previous Action Items
My 1:1 notes are messy. Half sentences. Abbreviations. Things like “follow up re: K’s concern about deployment pipeline — check with DevOps.” Perfectly legible to me in the moment, useless two weeks later.
I paste the raw notes into Claude and ask: “What were the open action items from this conversation? What was I supposed to follow up on?”
The output is a clean list. Takes 20 seconds. Then I check which ones I actually did and which ones I forgot about. Usually it’s a 60/40 split, which is uncomfortable but useful to know.
Prompt I use:
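The wording drifts week to week, but it's essentially the question above with the raw notes pasted underneath:

```
Here are my raw notes from a 1:1 two weeks ago. What were the open
action items from this conversation? What was I supposed to follow
up on? Return a short list, one item per line.

[paste raw notes here]
```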
Step 2: Review Recent PRs and Commits
Before a 1:1, I look at what someone’s been shipping. Not to play gotcha — to have something real to talk about. “How’s the refactor going?” is a worse question than “I saw you split the auth module last week, how did that go?”
The problem: reading PRs takes time, and I have multiple 1:1s per week.
What I do instead: I pull the recent PR list (titles, descriptions, review thread summaries) and ask AI to flag anything interesting. Not “good” or “bad” — just notable. Large PRs, reversals, PRs that took a lot of review cycles, PRs where someone left detailed comments.
Then I read those specific ones myself. The AI narrows my attention, I do the actual reading.
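The triage step can be sketched as a small script. This is a hypothetical helper, not a real tool — the field names (`additions`, `review_comments`, and so on) mirror the kind of metadata you might pull from the GitHub API or `gh pr list --json`, and the thresholds are arbitrary placeholders you'd tune to your team:

```python
# Hypothetical PR triage helper. Flags PRs worth reading in full before
# a 1:1 -- not "good" or "bad", just notable. Field names and thresholds
# are illustrative assumptions, not any particular API's schema.

def flag_notable_prs(prs, size_threshold=500, review_threshold=10):
    """Return (title, reasons) pairs for PRs that stand out."""
    flagged = []
    for pr in prs:
        reasons = []
        # Large diffs often hide a story worth asking about.
        if pr.get("additions", 0) + pr.get("deletions", 0) > size_threshold:
            reasons.append("large diff")
        # Long review threads suggest friction or disagreement.
        if pr.get("review_comments", 0) > review_threshold:
            reasons.append("many review cycles")
        # Reversals are almost always worth a conversation.
        if pr.get("reverted", False):
            reasons.append("reversal")
        if reasons:
            flagged.append((pr["title"], reasons))
    return flagged

prs = [
    {"title": "Split auth module", "additions": 800, "deletions": 300,
     "review_comments": 4},
    {"title": "Fix typo in README", "additions": 1, "deletions": 1,
     "review_comments": 0},
    {"title": "Revert cache change", "additions": 40, "deletions": 40,
     "review_comments": 2, "reverted": True},
]
for title, reasons in flag_notable_prs(prs):
    print(f"{title}: {', '.join(reasons)}")
```

In practice the AI does this filtering from the PR list directly; the point of the sketch is that the criteria are mechanical. The judgment — actually reading the flagged PRs — stays with you.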
This connects to something I wrote about in AI Code Review — AI is useful for triage, not judgment.
Step 3: Generate Discussion Topics
This one feels weird at first, but it's actually fine. I give the AI context about the person: what they're working on, what I know about their career goals (from previous notes), and any recent tensions or wins. Then I ask it to suggest discussion topics.
It’ll generate things like:
- “Check in on how the migration is going, given the deadline pressure you mentioned last time”
- “They mentioned wanting to do more system design work — is there an opportunity coming up?”
- “Their PR review turnaround has been slow — worth asking if they’re overloaded or blocked”
I don’t use all of them. Maybe three out of eight are actually relevant. But three good discussion topics is enough for a 30-minute 1:1, and generating them myself would take longer — and I’d probably miss something.
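Assembling that context is simple enough to sketch. This is my own convention, not any tool's API — the field names are hypothetical, and the prompt wording is a paraphrase of what I actually send:

```python
# Sketch of assembling a topic-generation prompt from what I know about
# a person. Field names and wording are illustrative assumptions.

def build_topic_prompt(name, current_work, career_goals, recent_notes,
                       n_topics=8):
    """Combine per-person context into a single prompt asking for
    candidate 1:1 discussion topics."""
    return (
        f"You're helping me prep a 30-minute 1:1 with {name}.\n"
        f"Current work: {current_work}\n"
        f"Career goals (from past conversations): {career_goals}\n"
        f"Recent notes: {recent_notes}\n"
        f"Suggest {n_topics} candidate discussion topics. "
        "Flag anything that looks like a tension or a win worth naming. "
        "I'll pick the few that actually fit."
    )

prompt = build_topic_prompt(
    name="K",
    current_work="payments migration, deadline pressure",
    career_goals="wants more system design work",
    recent_notes="PR review turnaround has been slow lately",
)
print(prompt)
```

The asymmetry is the point: the AI sees only what I've already written down about the person, and I filter its suggestions rather than reading them verbatim.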
Where I Draw the Line
Here’s the part that matters: I never let AI write my feedback.
Not the framing, not the phrasing, not the softening. If I need to tell someone their code quality has been slipping, I write that myself. If someone did something exceptional, I write that myself.
Why? Because feedback is a relationship act, not an information transfer. When someone can tell you’ve thought carefully about what to say to them specifically, it lands differently than boilerplate dressed up in their name.
AI is good at summarizing what happened. It’s not good at knowing why something matters to this specific person in this specific moment. That’s your job as a manager.
There’s a version of this tool that writes your performance reviews, drafts your feedback, and basically turns you into a coordinator of AI outputs. That version is worse at its job than a manager who does the thinking themselves.
The Creepy Version (Don’t Do This)
Some people take this further. They feed AI everything: Slack messages, email threads, commit logs, even mood indicators from standup tone.
I get why it’s tempting. More data = better picture, right?
But there’s a meaningful difference between reviewing someone’s public PRs and mining their communication patterns for “engagement signals.” One is doing your job. The other is building a surveillance dossier that would make your engineers deeply uncomfortable if they knew about it.
The test I use: would I be comfortable telling this person exactly what data I used and how? If the answer is no, I don’t use it.
Your 1:1s should feel like a safe space. The moment your engineer suspects you’re running sentiment analysis on their Slack messages, that trust is gone.
What This Actually Changes
Practically: my 1:1s are more useful. I show up with actual context, actual questions, and I follow up on things I said I would.
Philosophically: it forces me to be honest about when I wasn’t prepared before. The AI prep isn’t magic — it just makes skipping the prep visibly bad instead of invisibly bad.
If you want to dig deeper into how AI fits into the engineering management workflow, I also wrote about AI sprint planning and AI-generated test costs — both touch on similar themes of where automation helps and where it just shifts the work.
The short answer on 1:1 prep: use AI for the logistics, own the relationship yourself.