Nobody becomes a tech lead because they love writing documents. But documents are how decisions get made, how knowledge gets preserved, and how you avoid having the same argument three times.

I used to spend 3-4 hours on a good RFC. Now it takes me about an hour, and the quality is arguably better because I’m spending my time on the thinking instead of the formatting and prose.

Here’s how I use AI to write RFCs, ADRs, runbooks, and other technical documents — with actual templates and prompts I use every week.

Why AI Works Well for Technical Writing

Technical documents have a lot of structure. They follow templates. They have predictable sections. They require clear, direct prose rather than creative writing.

This is exactly what AI is good at. It can:

  • Generate well-structured first drafts from bullet points
  • Fill in standard sections (alternatives considered, risks, rollback plan)
  • Maintain consistent tone across a long document
  • Catch gaps in your reasoning (“you mentioned latency requirements but didn’t specify a target”)

What it can’t do: make the actual technical decisions. You still need to know what the right approach is. AI just helps you articulate it faster.

RFC (Request for Comments)

This is the document I write most often. Our team writes an RFC for any change that affects more than one service or team.

My RFC Prompt

Write an RFC from my notes below.

Use our template sections:
## Summary
## Background
## Proposal
## Detailed Design
## Alternatives Considered
## Migration Plan
## Risks
## Open Questions

Rules:
- Keep the prose direct and neutral, no hype.
- If my notes don't cover a section, mark the gap [TODO] instead of inventing details.
- Ask me clarifying questions before drafting if anything is ambiguous.

My notes:
[paste rough notes]
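If you write this kind of prompt every week, it helps to keep the template in code and supply only the notes. A minimal sketch; the section list and function names are my own illustration, not a fixed standard:

```python
# Assemble an RFC-drafting prompt from a rough notes string.
# Swap SECTIONS for your team's actual template.
SECTIONS = [
    "Summary", "Background", "Proposal", "Detailed Design",
    "Alternatives Considered", "Migration Plan", "Risks", "Open Questions",
]

def build_rfc_prompt(notes: str) -> str:
    headers = "\n".join(f"## {s}" for s in SECTIONS)
    return (
        "Write an RFC from my notes below, using these sections:\n"
        f"{headers}\n\n"
        "If my notes don't cover a section, mark it [TODO] instead of "
        "inventing details. Ask clarifying questions first if anything "
        "is ambiguous.\n\n"
        f"My notes:\n{notes}"
    )

print(build_rfc_prompt("events table is 2.3TB, candidates: TimescaleDB vs ClickHouse"))
```

The payoff is consistency: every RFC prompt carries the same sections and the same anti-hallucination rules, so you only ever edit the notes.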

Real Example

Last month I needed an RFC for migrating our event storage from PostgreSQL to a time-series database. My notes were rough:

- events table in postgres 14 is 2.3TB and growing
- peak ingest ~47K events/sec, queries over 24h windows are slow
- candidates: TimescaleDB, ClickHouse, InfluxDB
- TimescaleDB keeps postgres compatibility, ClickHouse is faster but a different SQL dialect
- want a dual-write migration period so we can compare before cutover

I fed this into Claude with my RFC template prompt. The first draft came back in about 45 seconds and was maybe 70% done. The structure was right, the alternatives section was solid (it correctly identified the tradeoffs between TimescaleDB’s PostgreSQL compatibility vs. ClickHouse’s raw performance), and the migration plan was reasonable.

What I had to fix:

  • The performance numbers were made up (AI doesn’t know our actual throughput). I replaced them with real measurements.
  • The “Detailed Design” section was too generic — it described TimescaleDB concepts rather than how we’d specifically integrate it with our pipeline.
  • It missed a critical risk: our downstream consumers expect PostgreSQL wire protocol, and ClickHouse’s compatibility layer has known limitations.

Total time: about 55 minutes from rough notes to a solid draft I could send for review. Previously this would have taken me 3+ hours.

ADR (Architecture Decision Record)

ADRs are shorter than RFCs — they document a single decision and its context. We write these for decisions that might be questioned in 6 months (“why did we use Redis instead of Memcached?”).

My ADR Prompt

Write an ADR for this decision: [decision]

Format: Status | Context | Decision | Consequences

- Context: What problem were we solving? What constraints did we have?
- Consequences: What becomes easier? What becomes harder? Be honest about the downsides.
- End with an Alternatives Rejected section: what we didn't pick, and why.
- Keep it under 500 words. Number it ADR-XXX.

Here are my notes: [notes]

The key instruction is “Be honest about the downsides.” Without it, AI writes ADRs that read like marketing — all upside, no tradeoffs. Real ADRs need to acknowledge what you’re giving up.

Example ADR Output (edited)

# ADR-047: Use Redis Streams for Inter-Service Event Bus

## Status
Accepted

## Context
Our services currently communicate via HTTP callbacks for async events.
This creates tight coupling, retry complexity, and makes it hard to add
new consumers. We need a lightweight event bus that supports:
- Multiple consumers per event
- At-least-once delivery
- Consumer group semantics (parallel processing)
- Low operational overhead (small team, can't run Kafka)

## Decision
Use Redis Streams as our inter-service event bus.

## Consequences
**Easier:**
- Adding new event consumers (just join the consumer group)
- Retry handling (built-in pending entry list)
- Debugging (we can inspect the stream directly)
- Operations (we already run Redis)

**Harder:**
- Durability guarantees are weaker than Kafka (Redis persistence has tradeoffs)
- No built-in schema registry (we'll need to manage event schemas ourselves)
- Stream memory grows until trimmed (need to set up MAXLEN or MINID)
- Team needs to learn Streams API (most only know basic Redis)

## Alternatives Rejected
- **Kafka:** Too much operational overhead for a 6-person team. Would need dedicated infra.
- **RabbitMQ:** Good fit, but adds another stateful service. Redis is already in our stack.
- **AWS SNS/SQS:** Vendor lock-in concern + adds latency vs. in-network Redis.

That took about 3 minutes of prompting and 5 minutes of editing. Writing it from scratch would take 20-30 minutes.
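Numbering ADRs by hand (ADR-047 above) eventually produces collisions. A small scaffold script can pick the next number and stamp out the skeleton; a sketch, assuming ADRs live as `ADR-NNN-slug.md` files in one directory (the layout and template fields are my assumptions):

```python
import re
from pathlib import Path

TEMPLATE = """# ADR-{num:03d}: {title}

## Status
Proposed

## Context
[TODO]

## Decision
[TODO]

## Consequences
**Easier:**
- [TODO]

**Harder:**
- [TODO]

## Alternatives Rejected
- [TODO]
"""

def new_adr(title: str, adr_dir: str = "docs/adr") -> Path:
    d = Path(adr_dir)
    d.mkdir(parents=True, exist_ok=True)
    # Find the highest existing ADR number and increment it.
    nums = [int(m.group(1)) for p in d.glob("ADR-*.md")
            if (m := re.match(r"ADR-(\d+)", p.name))]
    num = max(nums, default=0) + 1
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    path = d / f"ADR-{num:03d}-{slug}.md"
    path.write_text(TEMPLATE.format(num=num, title=title))
    return path

# Usage: new_adr("Use Redis Streams for inter-service event bus")
```

Then the AI prompt fills in the [TODO]s instead of also deciding the file naming and section order.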

Runbooks

Runbooks are where AI really shines because they’re highly structured and follow predictable patterns. The hard part of writing a runbook isn’t the writing — it’s remembering all the steps and edge cases.

My Runbook Prompt

Write a runbook for: [failure scenario]

Our stack: [stack details]

Format:
## Symptoms: what alerts fire, what users see
## Quick Diagnosis: numbered steps. For each one:
1. The exact command to check
2. Expected output (what normal looks like)
3. What it means if the output is abnormal
## Mitigation: steps to fix, including rollback
## Verification: how to confirm recovery
## Escalation: when to page, and who (P1/P2/P3)

Include actual, copy-pasteable commands, not placeholders. Assume the reader is a stressed on-call engineer.

Example: Database Connection Pool Exhaustion

I fed in details about our setup (PostgreSQL, pgbouncer, Node.js services) and got a runbook that included:

# Quick Diagnosis

## 1. Check current connections
psql -h pgbouncer-host -p 6432 pgbouncer -c "SHOW POOLS;"
# Expected: active connections near pool_size

## 2. Check for long-running queries
psql -h primary-db -c "
SELECT pid, now() - pg_stat_activity.query_start AS duration,
       query, state
FROM pg_stat_activity
WHERE (now() - pg_stat_activity.query_start) > interval '30 seconds'
ORDER BY duration DESC;"
# Expected: 0-2 long queries. If >5, this is likely the cause.

## 3. Check application-side pool metrics
curl -s http://service:9090/metrics | grep 'db_pool'
# Expected: db_pool_waiting should be 0. If >0, pool is exhausted.

The runbook AI generated was about 85% usable. I had to add our specific hostnames, adjust some commands for our pgbouncer configuration, and add a step we’d learned from experience (checking if a recent deployment introduced a connection leak). But the structure and flow were solid.
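The command/expected-output pattern in the diagnosis steps above can also be mechanized, so on-call runs one script instead of pasting three commands. A sketch of the idea; the checks here use `echo` stand-ins, and in reality each command would be the psql or curl invocation from the runbook:

```python
import subprocess

# Each check: a name, a shell command, and a predicate over its output
# encoding the runbook's "Expected:" line. The commands below are
# placeholders purely so the sketch runs anywhere.
CHECKS = [
    ("pool usage", "echo 'active=18 pool_size=20'",
     lambda out: "active=" in out),
    ("waiting clients", "echo 'db_pool_waiting 0'",
     lambda out: out.strip().endswith("0")),
]

def run_checks(checks):
    results = {}
    for name, cmd, ok in checks:
        out = subprocess.run(cmd, shell=True, capture_output=True,
                             text=True).stdout
        results[name] = ok(out)
        print(f"[{'PASS' if results[name] else 'FAIL'}] {name}")
    return results

run_checks(CHECKS)
```

Even if you never automate the fix, automating the diagnosis keeps the "Expected:" lines honest, because they get exercised every time the script runs.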

The Meta-Technique: Documents About Documents

One trick I’ve found unexpectedly useful: using AI to improve your document templates themselves.

Here's our RFC template, plus the review feedback it's collected over the last 8 months:

- "The risks section always feels like boilerplate"
- "Most reviewers skip straight to the detailed design"
- "RFCs never say how we'll validate the decision post-launch"

Revise the template to address this feedback. Keep the overall structure.

This has improved our templates more than any retrospective discussion about document quality.
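Collecting that feedback is the tedious part. If you can export review comments tagged with the template section they landed on, a few lines of stdlib Python will show which sections draw the most pushback; the data format and rows here are hypothetical, just to show the shape:

```python
from collections import Counter

# (template section, reviewer comment) pairs, e.g. exported from your
# review tool. These rows are illustrative.
comments = [
    ("Risks", "feels like boilerplate"),
    ("Risks", "no mitigation listed"),
    ("Alternatives Considered", "only strawmen here"),
    ("Risks", "copy-pasted from last RFC"),
]

def sections_by_pushback(rows):
    """Count comments per template section, most-commented first."""
    return Counter(section for section, _ in rows).most_common()

print(sections_by_pushback(comments))
```

The top sections, plus the raw comments, are exactly what the template-improvement prompt needs as input.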

What Doesn’t Work

Letting AI write the whole thing without your notes. If you prompt “Write an RFC for migrating to microservices” without context, you get a generic document that could apply to any company. It’s worse than useless because it looks complete but contains no actual thinking.

Skipping the review. AI-generated technical documents have a distinctive failure mode: they’re internally consistent but may not match your reality. The architecture diagram makes sense, but it’s not your architecture. The migration plan is reasonable, but it ignores your constraint that deployments only happen on Tuesdays. Always review against your actual system.

Using AI for the decision itself. I’ve seen people ask “Should we use Kafka or Redis Streams?” and then just go with whatever the AI says. The AI doesn’t know your team’s experience, your ops capacity, your latency requirements, or your budget constraints. Use it to articulate decisions you’ve already made (or are close to making), not to make decisions for you.

My Typical Week

To give you a sense of volume: in a typical week I produce about 2 RFCs, 1-2 ADRs, maybe a runbook update, and various meeting notes and summaries. AI assistance probably saves me 4-6 hours per week on documentation alone. That’s real time I can spend on code review, architecture work, or (let’s be honest) leaving work at a reasonable hour.

The quality improvement matters as much as the speed. My documents are more consistent in structure, more thorough in covering alternatives and risks, and easier for the team to review because they follow predictable formats.


📦 Free: AI Code Review Prompt Pack — 10 prompts I use on 15+ PRs/week.

Newsletter: One practical AI workflow per week, plus templates I don’t publish here. Subscribe →