I write about AI tools that actually work, developer productivity, and engineering insights.
No hype. No listicles. Just stuff I’ve tested in production.
In February 2026, a prompt injection hidden in a GitHub issue title led to an npm supply chain compromise affecting 4,000 machines. A month before that, invisible HTML comments in issues caused Copilot to leak GITHUB_TOKEN values. My team had a near-miss of our own. Here’s the anatomy of these attacks, what the IDEsaster disclosures revealed about the entire AI IDE ecosystem, and the 4-layer defense model that actually makes a difference.
A Hacker News thread with 400+ comments captured what my senior engineer said quietly in our 1-on-1: AI coding agents are eating the creative, enjoyable parts of software engineering and leaving humans with the tedious review work. As a tech lead, I’m watching engagement scores drop while velocity metrics climb. Here’s what that tension actually looks like from the inside, and the three structural changes I’m making to keep my team from checking out.
Google Stitch 2.0 just dropped with an AI-native infinite canvas, voice input, and code export. I spent a week testing it against our actual design-to-dev workflow. It’s shockingly good for mobile mockups — and frustratingly bad for complex dashboards. Here’s the breakdown.
Simon Willison coined ‘agentic engineering’ to describe professional software engineers using AI coding agents like Claude Code and Codex. I ran my team through two sprints of structured agentic workflows. The productivity numbers look great — until you check the bug tracker three weeks later.
I ran an informal A/B test across 100 PRs, routing each through both AI and human review. AI caught 23 bugs humans missed. It also produced hundreds of useless comments. Here’s what the data actually says.
Ad-hoc AI usage creates inconsistency and hidden risk. Here’s how to build a practical standard: AGENTS.md, .cursorrules, prompt libraries, and review checklists — with real templates.
AI can help you prep for 1:1s by surfacing action items, recent PRs, and discussion topics. But there’s a line you shouldn’t cross: never let AI write your actual feedback.
Automating standup notes with AI seemed like a small quality-of-life win. It turned out to surface real questions about surveillance, trust, and what standup is actually for.
Claude drafted a technical RFC in 20 minutes. My teammates said it ‘reads like a textbook.’ Here’s what the experiment taught us about where AI fits in technical writing.
After running MCP servers in my daily workflow for months, here’s what’s genuinely useful versus what sounded great in a demo and quietly disappeared from my setup.