Three months ago I thought I’d found the perfect low-stakes AI automation: summarize our daily standups. Ten minutes of meeting → two paragraphs of notes. Nobody has to take notes, everything gets captured, done.
By week three, two engineers had pulled me aside separately to say they hated it. By week five I’d rolled most of it back.
Here’s what happened.
Table of Contents
- The Setup
- The First Two Weeks (It Was Going Great)
- Where It Started Breaking Down
- The Misattribution Problem
- The Surveillance Problem
- The Missing Context Problem
- What I Rolled Back and What I Kept
- What Standup Is Actually For
The Setup
Standups are 15 minutes and async-first, with a written format for anyone who can’t make it. The live version is a quick sync — blockers, notable updates, anything that needs cross-team coordination.
The automation was simple: record the meeting, transcribe it, run the transcript through a prompt that extracted: who said what, blockers mentioned, follow-up items. Post the summary to our channel.
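The whole thing fits in a few lines of glue code. This is a hypothetical sketch, not the actual implementation — `transcribe`, `summarize`, and `post` stand in for whatever speech-to-text service, LLM API, and chat integration you wire up:

```python
def standup_summary(audio_path: str, transcribe, summarize, post) -> str:
    """Record -> transcribe -> prompt -> post. The dangerous simplicity
    is right here: nothing between the model's output and the channel."""
    transcript = transcribe(audio_path)  # speech-to-text step
    prompt = (
        "From this standup transcript, extract: who said what, "
        "any blockers mentioned, and follow-up items.\n\n" + transcript
    )
    summary = summarize(prompt)          # LLM call
    post(summary)                        # e.g. the team channel
    return summary
```

Note there is no review step between `summarize` and `post` — that omission is most of this story.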
I was pretty pleased with myself. The output looked clean. No more asking “wait what did we decide about X last Tuesday?”
The First Two Weeks (It Was Going Great)
The first two weeks were genuinely good. The summaries were accurate enough. People would reference them when they needed context. I stopped taking notes manually. The follow-up items list was actually useful.
I mentioned it in a team retrospective and everyone seemed positive. I was already thinking about what else I could automate.
This is when I should have been more suspicious of the consensus. When everyone agrees something is fine in a retro, sometimes it means it’s fine. Sometimes it means nobody wants to be the first person to complain.
Where It Started Breaking Down
Week three. A summary attributed a blocker to the wrong person — said “one engineer is blocked on the API integration” when it was actually a question about whether someone else’s work was blocked. The engineer noticed. She didn’t say anything publicly, but she sent me a message.
“The summary got that wrong. I wasn’t the blocked one.”
I said I’d fix it. I did. But then I started looking at previous summaries more carefully and found similar misattributions. Nothing malicious — just the model making reasonable guesses from unclear pronoun references and getting them wrong maybe 15% of the time.
In a meeting summary, a 15% error rate on attribution is not a small flaw. It’s frequent enough to make every summary suspect.
The Misattribution Problem
Transcript: “So are you blocked on that or is it more like you’re waiting on Marcus?”
Summary: “Marcus is blocked on the integration work.”
The model heard “blocked” and “Marcus” in the same sentence and drew the natural conclusion. But the actual situation was the opposite — Marcus hadn’t responded to a question yet, and the person asking was wondering if they should follow up.
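You can reproduce the shape of this failure without an LLM at all. A toy extractor that flags any turn containing blocker language makes the same mistake: the speaker of a sentence containing “blocked” is not necessarily the blocked person. (The transcript format and names here are illustrative, not from my actual data.)

```python
import re

def parse_transcript(transcript: str) -> list[tuple[str, str]]:
    """Split a 'Name: utterance' transcript into (speaker, text) turns."""
    turns = []
    for line in transcript.strip().splitlines():
        m = re.match(r"^(\w+):\s*(.+)$", line)
        if m:
            turns.append((m.group(1), m.group(2)))
    return turns

BLOCKER_CUES = ("blocked", "waiting on", "stuck")

def flag_blockers(turns: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Naive keyword-proximity extraction: flag any turn mentioning
    blocker language, attributed to whoever said it."""
    return [(speaker, text) for speaker, text in turns
            if any(cue in text.lower() for cue in BLOCKER_CUES)]

transcript = """
Dana: Shipped the retry logic yesterday.
Priya: Are you blocked on that, or are you waiting on Marcus?
"""
print(flag_blockers(parse_transcript(transcript)))
# flags Priya's question — she said "blocked", but nobody here is blocked
```

A real model is smarter than keyword matching, but on ambiguous pronouns it degrades toward this same heuristic: attribute the state to whichever name sits nearest the keyword.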
The difference matters. A wrong attribution in meeting notes can become someone’s informal performance record. If a few months of summaries consistently link your name to blockers or delayed work — even incorrectly — that’s a problem.
I hadn’t thought about this when I set it up. I was thinking about note-taking convenience, not about what the downstream consequences of wrong notes could be.
The Surveillance Problem
This was the bigger issue and it took me longer to see it.
One engineer told me directly: “I don’t love that everything I say is being recorded and processed.” Another said it slightly differently: “Standup used to feel casual. Now it feels like it’s on the record.”
Both of them are right, and I hadn’t adequately thought through what it would feel like from their side.
There’s a difference between “the team takes notes sometimes” and “an automated system is always capturing everything.” The latter changes the register of the conversation. People edit themselves. They phrase things more carefully. The quick “I’m stuck, this is annoying” becomes “I’m working through a challenge with the pipeline.” Standup becomes a little more formal, a little less useful.
Standup at its best is a space to surface problems early, including half-formed problems you’re not sure are problems yet. You need psychological safety for that. An always-on summarization system, regardless of intent, is a mild threat to that safety.
I should have asked the team before implementing it. That’s the obvious lesson I didn’t apply.
The Missing Context Problem
The third issue was more technical. Meeting summaries without context are often misleading.
“Team discussed concerns about the Q2 timeline” — but the transcript doesn’t capture that this was a three-minute conversation that ended with everyone feeling fine, not a forty-minute debate. The summary reads more alarming than the meeting was.
“Follow-up: review deployment approach” — but this was offhand, it wasn’t actually assigned to anyone, and nobody was planning to do it. The AI captured it as an action item because it was phrased like one, but the humans in the room understood from tone and context that it wasn’t.
AI has no access to the subtext. The eye-rolls, the laughs, the “yeah I know, we’ll figure it out” that signals something is not actually a live issue — none of that survives transcription.
What I Rolled Back and What I Kept
I turned off the automated posting. The summaries no longer go into the channel automatically.
What I kept: I still run the transcription, and I still run the summary prompt. But I read it myself before anything gets shared, and I only post it if it’s substantially accurate and adds something. Two or three times a week I post a manual version based on my own notes. The AI summary is a draft for me, not a product.
I also told the team explicitly: recording for my own reference only, not being stored or used for evaluation. And I asked whether people wanted to turn off recording entirely — half the team said yes, so now recording is optional per meeting.
This is roughly the right model for AI automation on anything that involves people: the human stays in the loop as a filter, not just as an approver of the final output.
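In code terms, the fix was one structural change to the pipeline sketch above: the model’s output becomes input to a person, and only what the person chooses to share reaches the channel. A minimal sketch of that gate (function names are mine, not a real API):

```python
def filtered_post(draft: str, human_edit, post) -> bool:
    """Human-as-filter: a person may rewrite the AI draft, replace it
    with their own notes, or drop it entirely. Returns True only if
    something was actually shared."""
    final = human_edit(draft)  # returns text to share, or None to share nothing
    if final is None:
        return False
    post(final)
    return True
```

The point of returning the human’s text rather than a boolean approval is exactly the filter-vs-approver distinction: an approver can only say yes or no to the model’s words; a filter can substitute their own.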
What Standup Is Actually For
Standup isn’t primarily about information transfer. It’s about team coherence — a daily moment where people feel connected and willing to surface problems early. Optimizing for information capture optimizes for the wrong thing.
If you’re thinking about automating standup notes: try it, but ask your team first, keep a human in the loop, and be ready for the feedback that it changes the feel of the meeting. More on the automation tradeoffs in MCP Servers Daily Use.