Prototype First, Requirements Second

The process we inherited: I have been rethinking how we clarify requirements in software teams, because the way most of us still work feels optimized for a world that no longer exists. Traditionally, we treated requirements as something that had to be settled before implementation began. We wrote documents, scheduled planning sessions, debated edge cases in meetings, aligned on tickets and estimates, and only after everyone felt comfortable did we finally start building. That process made sense when writing software was expensive and slow. If a feature took weeks to implement, investing time up front to reduce mistakes was rational. ...

January 27, 2026 · 4 min · Michael

How I Built It: Grandfather Clock & Notion-Driven AI Development

Building a skeuomorphic clock with AI, Notion, and no copy-pasting.

January 25, 2026 · 1 min · Michael

If My Agent Isn’t in the Meeting, Are We Wasting Our Time?

A thought has been bothering me lately during meetings. Not about the agenda. Not about the discussion quality. Something feels different. My agent isn’t here. If my agent is the one who will eventually implement, analyze, draft, simulate, and execute most of the follow-up work, why are we making decisions without it in the room? And if the agent isn’t present, are we actually wasting part of the meeting?

For years, meetings assumed a very specific workflow. Humans discuss. Humans decide. Humans go back and execute. The people in the room were the ones who would do the work later, so alignment made sense. The decision-makers and the executors were the same group.

Today, a large portion of execution is delegated to agents. They write the code, analyze the logs, generate the docs, explore design options, and draft the first implementation. In many cases, they are faster and more thorough than we are. Which means something subtle has changed. The “doers” are no longer fully represented in the meeting.

So what happens? We make decisions based on partial information. We debate feasibility without quickly validating it. We speculate about tradeoffs that an agent could prototype in minutes. We leave with action items that could have been pre-computed during the conversation. In short, we talk first and test later.

But in an agentic workflow, that order is backwards. If agents compress execution to minutes or hours, then meetings should compress uncertainty, not create more of it. Instead of debating what might work, we should be asking the agent in real time. Instead of guessing complexity, we should generate a quick implementation. Instead of assigning follow-up analysis, we should produce it on the spot. The meeting becomes less about opinions and more about evidence.

This doesn’t mean replacing humans. It means changing the role of humans. We should focus on framing the problem, defining constraints, and making judgment calls, while the agent handles exploration and validation live. In that world, an agent is not a tool you use after the meeting. It is a participant during the meeting. Almost like a quiet engineer sitting next to you, ready to answer “what if” instantly.

So now I catch myself thinking in meetings: if my agent isn’t here, are we making decisions blind? Because increasingly, the highest leverage thing in the room isn’t another opinion. It’s faster feedback.

January 24, 2026 · 2 min · Michael

Quizzes, Not Diffs

As AI agents accelerate code generation, reviewing every diff no longer scales. The real bottleneck shifts from writing code to understanding its consequences. Instead of relying solely on explanations or tests, a simple technique like asking the agent to generate a short quiz forces the human to prove comprehension before approving changes. Tests verify correctness, explanations summarize intent, but quizzes verify understanding. In an agentic world, effective oversight is less about inspecting code and more about validating that we truly understand what the system will do.
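The quiz idea can be made concrete as a small "quiz gate" in a review workflow. The sketch below is hypothetical, not from the post: `ask_agent` and `grade` are stand-ins for whatever LLM call and answer-checking you use, and the 0.8 pass threshold is an arbitrary illustration.

```python
# Hypothetical sketch of a "quiz gate": before approving an agent's change,
# ask the agent for a short quiz about the diff and require the reviewer to
# pass it. `ask_agent` and `grade` are placeholders, not real APIs.

def build_quiz_prompt(diff: str, n_questions: int = 3) -> str:
    """Format a prompt asking the agent to quiz the reviewer on a diff."""
    return (
        f"Here is a code diff:\n\n{diff}\n\n"
        f"Write {n_questions} short multiple-choice questions that test "
        "whether a reviewer understands the consequences of this change. "
        "Include the correct answers at the end."
    )

def quiz_gate(diff: str, ask_agent, grade) -> bool:
    """Approve the change only if the human's quiz score clears the bar."""
    quiz = ask_agent(build_quiz_prompt(diff))
    score = grade(quiz)  # fraction of questions the reviewer got right
    return score >= 0.8  # threshold chosen for illustration only

# Usage with stubs in place of a real agent and a real grading step:
approved = quiz_gate(
    "diff --git a/app.py b/app.py",
    ask_agent=lambda prompt: "Q1: ... Q2: ... Q3: ...",
    grade=lambda quiz: 1.0,
)
```

The point is the shape of the loop, not the implementation: comprehension becomes an explicit, checkable step before approval rather than an assumption.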

January 22, 2026 · 4 min · Michael

A Practical Guide to Parallel AI Development with Multiple Agents

The idea of running many AI coding agents at the same time is powerful, but often misunderstood. When people claim they have ten or fifteen agents working in parallel, they rarely mean ten agents editing the same codebase simultaneously. What they usually mean is that they have many independent AI contexts working on different problems, and they orchestrate the results.
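The orchestration pattern described here can be sketched in a few lines. This is an illustrative skeleton, not code from the post: `run_agent_task` is a placeholder for launching one agent context on one isolated problem (for example, in its own git worktree or branch), and the results are collected for a human to review and merge.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent_task(task: str) -> dict:
    """Stand-in for running one independent agent context on one problem.
    In practice this might spawn an agent in its own git worktree."""
    return {"task": task, "status": "done", "branch": f"agent/{task}"}

def orchestrate(tasks: list[str], max_parallel: int = 4) -> list[dict]:
    """Fan tasks out to independent contexts; collect results for review."""
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(run_agent_task, tasks))

results = orchestrate(["fix-login-bug", "add-csv-export", "refactor-settings"])
```

Note what the sketch does and does not parallelize: the tasks are independent, so there is no shared mutable codebase being edited concurrently; the merge step stays sequential and human-driven, which matches the claim that "parallel agents" usually means parallel contexts, not parallel edits.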

January 18, 2026 · 7 min · Michael

How I Built a Spotify and Google Speaker Controller with Claude

TL;DR: I got tired of yelling “Hey Google” at my smart speaker. So I paired with Claude Code to build command-line scripts that control Spotify on my Google Home. Network discovery → API setup → working scripts in one session. Now I control my house from the terminal like a proper developer. The Problem That Started It All Picture this: You’re deep in flow, coding with headphones on. You want to change the music on your Google Home speaker in the other room. Your options? ...
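One plausible building block for such a script, assuming the Google Home is visible to Spotify as a Connect device: Spotify's Web API has a transfer-playback endpoint (`PUT /v1/me/player`) that moves playback to a device by id. The post does not show its code, so this is a hedged sketch; obtaining the OAuth token and discovering the device id are assumed to happen elsewhere.

```python
# Sketch of the Spotify Web API "transfer playback" call a controller script
# might wrap. The endpoint is real; the token and device id are assumed to
# come from an OAuth flow and a device-listing call done elsewhere.

def transfer_playback_request(device_id: str, token: str, play: bool = True) -> dict:
    """Build the request that points Spotify playback at a given device."""
    return {
        "method": "PUT",
        "url": "https://api.spotify.com/v1/me/player",
        "headers": {"Authorization": f"Bearer {token}"},
        "json": {"device_ids": [device_id], "play": play},
    }

req = transfer_playback_request("abc123deviceid", "YOUR_OAUTH_TOKEN")
# Send with e.g. requests.request(**req) once you hold a valid token with
# the user-modify-playback-state scope.
```

Keeping the request construction pure, as above, makes the script easy to test without touching the network; the actual HTTP call is a one-liner around it.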

January 12, 2026 · 10 min · Michael