We needed to propose a significant change to how we handle event ingestion. Normally an RFC like that takes me a day — sketching the problem, enumerating alternatives, writing out the trade-offs, getting to a recommendation. On a Tuesday afternoon with four other things in my queue, I decided to run an experiment: hand the whole thing to Claude and see what came out.

Twenty minutes later I had a seven-page draft. It was technically coherent, well-structured, and covered all the standard RFC sections. My lead engineer read it and said, “This reads like a textbook.”

That’s not a compliment.

What I Gave Claude

I gave it:

  • A description of the current system architecture (about 400 words)
  • A summary of the problem we were trying to solve
  • Three approaches I’d already been considering
  • A constraint list (latency budget, infra we couldn’t change, team familiarity)

Reasonably complete context. More than I’d give a new engineer on the team and probably less than I’d give a senior one.

What Came Back

The structure was solid. Claude organized it correctly: background, problem statement, proposed solutions, comparison matrix, recommendation, risks. If you’d shown it to someone outside the company, they’d have thought it was fine.

The prose was clean. No grammar issues, no meandering paragraphs.

The comparison matrix was thorough — it hit latency, operational complexity, cost, and rollback difficulty for each option. All reasonable dimensions.

Where It Actually Fell Short

Organizational context was missing. One of the three approaches was technically superior on paper but required skills we don’t have on the current team. Claude had no way to know that. It recommended that approach.

The trade-off analysis was generic. “Option B has higher operational complexity” is true but useless to us. What mattered was that we’d had an on-call incident last quarter because of a similar architectural pattern. That context made Option B a non-starter regardless of the paper comparison. Claude didn’t know about the incident.

The risks section was textbook. It listed risks that would apply to any system in this category — latency variance, data loss, schema drift. All real risks. None of them the specific ones our setup actually faces. Reading it felt like reading the risk section of a vendor whitepaper.

No political awareness. RFC adoption isn’t just technical. One of the approaches was going to get pushback from a stakeholder because of a decision they’d made two years ago that this change would implicitly reverse. An RFC that doesn’t address that head-on doesn’t get approved. Claude had no idea.

The “Reads Like a Textbook” Problem

My engineer’s “reads like a textbook” critique was precise. The RFC read like a generalized case study, not a document about our actual system written by someone who’s been running it for two years.

That’s the fundamental limitation: an RFC is partly a technical document and partly a persuasion document. You’re making a case to specific people who have specific concerns, specific history, and specific blind spots. AI doesn’t know your audience. It knows the generic version of your audience.

Generic is fine for background sections. It’s actively harmful for trade-off analysis and recommendations, where the value of the document comes from specific, contextualized judgment.

The Downside of AI-Drafted RFCs

Beyond quality, there’s a process risk.

Writing an RFC forces you to think through the problem deeply. The act of writing the trade-off section is where you often discover that your preferred option has a flaw you hadn’t considered. If you outsource that section to AI, you skip the thinking. You might get a document that looks complete but represents shallower analysis than you’d have done yourself.

I noticed this in my own experiment. Because I had a draft quickly, I felt less compelled to stress-test my own reasoning. The document existed, it looked finished, and there was implicit pressure to move forward with it rather than tear it apart.

There’s also an attribution problem. When someone asks “why did you weigh latency over cost here?” you need to own that answer. If Claude made that call and you rubber-stamped it, that’s a weaker position than if you reasoned through it yourself.

The Process That Actually Worked

After the initial “textbook” feedback, I rebuilt the RFC with a hybrid approach:

  1. AI drafts the background and structure — problem statement, current state, constraints. Fast and clean, no domain judgment required.

  2. Human writes the trade-offs — specifically the parts that require organizational memory, political awareness, and historical context. This section gets written by someone who’s actually been running the system.

  3. AI polishes — once the trade-off analysis and recommendation are written by a human, AI is useful for tightening prose, adding transition sentences, and checking that the document flows logically.

That process cut my total time from a full day to about three hours. The draft was better than the pure-AI version because the critical sections had real thinking behind them.

For more on AI-assisted documentation patterns, see "writing technical docs with AI."

When to Use AI for RFCs (and When Not To)

Use AI for:

  • Greenfield proposals where you’re documenting a well-understood technical pattern
  • Getting a structural draft when you’re staring at a blank page
  • Editing and polishing once the core thinking is done
  • Generating comparison matrices (then verify every cell)

Don’t rely on AI for:

  • Trade-off sections in systems with significant organizational history
  • Recommendations that depend on team capability or past incidents
  • Risk analysis specific to your infrastructure
  • Anything that needs to land with a skeptical internal audience

Final Take

The experiment was worth running. Twenty minutes for a structural draft is genuinely useful. But an RFC isn’t a deliverable you can outsource — it’s a record of your team’s reasoning about a specific problem in a specific context.

The parts that matter most are the parts AI can’t write. The parts AI can write are the parts that anyone with general knowledge could write anyway.

Use it as a scaffolding tool. Don’t mistake the scaffold for the building.