When people talk about AI for engineers, they usually mean coding tools. But if you’re an engineering manager or tech lead, coding is maybe 40% of your job. The other 60% is meetings, planning, documentation, and communication.

That other 60% is where AI saves me the most time.

1. Preparing for 1:1 Meetings

I have 7 direct reports. That’s 7 weekly 1:1s, and for a while I was walking into at least half of them having skimmed their recent PRs five minutes beforehand. Not great.

Now, before each 1:1 I gather everything I can (merged PRs, completed tickets, last meeting’s notes, even their recent Slack activity in team channels) and feed it into a prompt I’ve refined over time:

  Here's everything I know about [team member]'s week:
  - PRs merged: [paste from GitHub]
  - Tickets completed: [paste from Jira]
  - Last 1:1 notes: [paste]
  - Their recent Slack activity: [paste]

  Generate a prep sheet:
  1. Talking points for this week
  2. Themes across their recent work
  3. Open questions to ask (follow-ups from last meeting)
  4. Any signals of concern (workload, key topics, blockers)

The output isn’t a script — it’s a cheat sheet. I scan it, cross out anything that feels off, and walk in with actual talking points instead of “so… how’s it going?” I’ve gotten positive feedback that the 1:1s feel more prepared and focused.
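The assembly step is mechanical enough to script once you export the data. A minimal sketch (the function and field names here are my own invention, not a real tool):

```python
# Hypothetical helper: turn exported PR/ticket/notes data into the
# context dump that precedes the prep-sheet request.

def build_one_on_one_prompt(name, merged_prs, completed_tickets,
                            last_notes, slack_activity):
    """Assemble the 1:1 context block for one direct report."""
    lines = [
        f"Here's everything I know about {name}'s week:",
        "- PRs merged: " + "; ".join(merged_prs),
        "- Tickets completed: " + "; ".join(completed_tickets),
        f"- Last 1:1 notes: {last_notes}",
        f"- Recent Slack activity: {slack_activity}",
        "",
        "Generate a prep sheet: talking points, wins to recognize,",
        "open questions to ask, and any signals of concern.",
    ]
    return "\n".join(lines)
```

The payoff is consistency: every report gets the same prep quality, regardless of how rushed the week was.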

2. Writing Technical RFCs

What used to take most of a day now takes a focused hour or so. I feed AI the context — the problem, my proposed solution, alternatives I’ve ruled out — and it generates an 80% complete RFC draft in minutes. I spend the remaining time adding real data, correcting technical details, and layering in the nuance that only I know.

The problem that led me here: RFC writing is important but slow. A good RFC takes 4-8 hours. When you’re busy, they get deprioritized, and decisions get made in Slack threads instead.

What the prompt looks like:

  I need to write an RFC.
  Problem: [describe the problem]
  Context: [system details, current architecture, traffic relative to peak load]
  Proposed solution: [your approach]
  Alternatives I've ruled out:
  - [option A, and why it was rejected]
  - [option B, and why it was rejected]

  Generate a complete RFC draft with these sections:
  1. Problem statement (2-3 paragraphs)
  2. Proposed solution
  3. Alternatives considered
  4. Migration plan
  5. Risks and mitigations
  6. Security considerations
  7. Open questions

  Write in a neutral engineering tone. Mark anything you're unsure of with [TODO] so I can add real data.

The biggest win isn’t speed — it’s that RFCs actually get written now instead of languishing as “I’ll get to that” tasks.
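Because the section list never changes, the request itself can be templated. A sketch under my own assumptions (the section names are one reasonable default, not a standard):

```python
# Hypothetical RFC-prompt builder: fixed section list, three inputs
# I always have before I sit down to write.

RFC_SECTIONS = [
    "Problem statement",
    "Proposed solution",
    "Alternatives considered",
    "Migration plan",
    "Risks and mitigations",
    "Security considerations",
    "Open questions",
]

def build_rfc_prompt(problem, proposal, rejected_alternatives):
    """Build the RFC-draft request from problem, proposal, and rejects."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(RFC_SECTIONS, 1))
    alts = "\n".join(f"- {alt}" for alt in rejected_alternatives)
    return (
        f"I need to write an RFC.\n"
        f"Problem: {problem}\n"
        f"Proposed solution: {proposal}\n"
        f"Alternatives I've ruled out:\n{alts}\n\n"
        f"Generate a complete RFC draft with these sections:\n{numbered}"
    )
```

Templating the request is also what keeps the drafts reviewable: every RFC lands with the same skeleton, so readers know where to look.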

3. Sprint Planning and Estimation

Last quarter we had a ticket called “Migrate user preferences to new schema.” Looked like a medium. We pulled it into the sprint without much discussion. Mid-sprint, someone realized we had no backfill strategy for existing users — the migration would silently drop preferences for anyone who didn’t log in during the rollout window. Cue the scramble, cue the awkward conversation with product about slipping timelines.

That’s the kind of thing I now catch before the sprint starts. I paste the upcoming tickets into a prompt and ask it to play devil’s advocate:

  Here are next sprint's tickets. Play devil's advocate:
  [Ticket 1]: [title and description]
  [Ticket 2]: [title and description]
  [Ticket 3]: [title and description]
  [Ticket 4]: [title and description]

  For each ticket, give me:
  1. Edge cases we might be missing
  2. Likely hidden dependencies
  3. Failure modes or risky assumptions
  4. Implicit requirements that aren't written down
  5. Size estimate (S/M/L/XL), plus any questions we should answer before the sprint

It won’t catch everything, and the estimates are rough, but it consistently surfaces the “wait, did anyone think about…” questions that used to ambush us mid-sprint. The schema migration backfill? It flagged exactly that.
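When the response comes back, I want the size estimates in one place for the planning meeting. A small parser sketch, assuming the reply echoes the `[Ticket N]` labels and puts an `Estimate:` line under each one (that reply format is my assumption, not guaranteed):

```python
import re

def extract_estimates(response_text):
    """Map ticket number -> size (S/M/L/XL) from a devil's-advocate reply.

    Assumes the model repeats '[Ticket N]' headers and writes an
    'Estimate: <size>' line somewhere under each one.
    """
    sizes = {}
    current = None
    for line in response_text.splitlines():
        header = re.match(r"\[Ticket (\d+)\]", line.strip())
        if header:
            current = int(header.group(1))
        est = re.search(r"Estimate:\s*(S|M|L|XL)\b", line, re.IGNORECASE)
        if est and current is not None:
            sizes[current] = est.group(1).upper()
    return sizes
```

Anything the parser misses is usually a sign the model hedged on the estimate, which is itself worth a look before the sprint.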

4. Translating Between Technical and Non-Technical

Here’s a sentence from a real status update I wrote: “We hit a blocker with pgbouncer transaction mode not supporting prepared statements, which required refactoring 12 endpoints.” Perfectly clear to my team. Completely meaningless to my VP. And rewriting it as “a database compatibility issue added two days of unplanned work” felt like I was losing important detail.

I used to write three versions of every significant update — one for the team, one for product, one for leadership. Now I write the technical version once and let AI handle the translation:

  Here's my technical status update:

  "We finished the Postgres 16 migration, which included updating 47 queries that used deprecated syntax. The new connection pooling reduced p99 latency from 340ms to 180ms. We hit a blocker with pgbouncer transaction mode not supporting prepared statements, which required refactoring 12 endpoints. We're now at 94% test coverage on the migrated code."

  Rewrite it for three audiences:
  1. Product manager (focus: timeline impact and open questions for planning)
  2. VP of Engineering (focus: risks, wins, and what we learned)
  3. Stakeholders outside engineering (focus: customer impact in plain language)

I still review and tweak each version — the PM translation sometimes buries the timeline impact, and the exec version can be too rosy — but the starting drafts are solid. What used to eat 20 minutes per update now takes about two.
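The audience-to-focus mapping is the part worth pinning down in one place. A sketch with my own default focus strings (hypothetical names, not a library):

```python
# Hypothetical mapping of audience -> what that reader cares about.
AUDIENCE_FOCUS = {
    "product manager": "timeline impact and what it means for planning",
    "vp of engineering": "risks, wins, and where we need support",
    "stakeholders": "customer impact in plain language, no jargon",
}

def build_translation_prompt(update, audience):
    """Wrap one technical update in a rewrite request for one audience."""
    focus = AUDIENCE_FOCUS[audience]
    return (
        f"Rewrite this status update for a {audience} "
        f"(focus: {focus}). Keep dates and timeline impacts explicit.\n\n"
        f'"{update}"'
    )
```

The "keep timeline impacts explicit" instruction exists precisely because that is the detail the translations tend to bury.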

5. Post-Incident Analysis

A few months ago we had a 15-minute outage — a feature flag deployment went to 100% of users instead of a 5% canary, and users without a payment method started getting 500 errors. The incident response itself was fine: alert at 14:23, rollback by 14:38, all clear by 15:00. But by the time the dust settled, everyone was tired and behind on their sprint work. The post-mortem sat as a half-finished doc for a week.

That’s the pattern: the incident gets handled, but the write-up doesn’t. Now I dump my raw timeline notes and root cause into a prompt right after the incident, while it’s still fresh:

  Write a post-mortem from my raw incident notes.

  Timeline:
  - 14:23: Alert fired (500 error rate above 5%)
  - 14:25: On-call acknowledged, began investigating
  - 14:31: Identified the feature flag deploy as the trigger
  - 14:33: Rollback initiated
  - 14:38: Rollback complete, error rate recovering
  - 14:45: Error rate back to baseline
  - 15:00: All clear

  Root cause: The feature flag went to 100% of users instead of a 5% canary. The feature had an unhandled edge case for users with no payment method.
  Impact: Users without a payment method saw 500 errors between 14:15 and 14:38, with ~450 failed payment attempts.

  Structure it as:
  1. Summary
  2. Impact
  3. Timeline
  4. Root cause (use 5 Whys)
  5. What went well
  6. What went poorly
  7. Action items (each with an owner)

The 5 Whys section is where this pays off the most. AI is relentless about asking “why” in ways that surface systemic issues — in this case, it traced the flag misconfiguration back to our deployment pipeline lacking a canary-percentage validation step, which was a more useful action item than “be more careful with feature flags.”
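My raw notes are just timestamped fragments; sorting and formatting them is the one mechanical step before the prompt. A tiny sketch, assuming same-day, zero-padded HH:MM timestamps:

```python
def format_timeline(events):
    """Turn raw (time, note) pairs into a sorted timeline block.

    Times are zero-padded 'HH:MM' strings, so a plain string sort
    orders them correctly within a single day.
    """
    return "\n".join(f"- {t}: {note}" for t, note in sorted(events))
```

Dumping notes in any order and letting the sort fix it lowers the friction of capturing the timeline while the incident is still fresh.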


The Common Thread

None of these use cases involve AI writing code. They’re all about accelerating the communication and decision-making parts of engineering leadership.

The ROI is huge because these tasks:

  • Are high-value but time-consuming (like code review, which I covered separately)
  • Follow predictable patterns (perfect for AI)
  • Don’t require AI to be perfect (you’re reviewing and editing)
  • Free up time for the work that actually needs a human — building relationships, making judgment calls, and setting direction

If you’re a tech lead spending more than 30% of your time on documentation and communication, you’re leaving hours on the table by not using AI for these workflows.


You might also like


📦 Free: AI Code Review Prompt Pack — 10 prompts I use on 15+ PRs/week.

Newsletter: One practical AI workflow per week, plus templates I don’t publish here. Subscribe →