đź‘‹ Welcome to Michael Guo’s Blog

Let’s learn and build together! Check out my Portfolio.

From Tool to Teammate: Navigating the Four Tensions of Agentic AI

Gartner says 40% of enterprise apps will embed AI agents by the end of 2026. But here's the thing nobody's talking about: agents aren't just better tools; they're a new category that breaks how we organize work. The 2025 MIT/BCG research reveals four tensions every leader will face, and most aren't ready.

February 13, 2026 · 5 min · Michael

Designing Enterprise Data Agents: From Pipelines to Agent-Native Architecture

Natural language interfaces for data are no longer experimental; they're becoming essential enterprise tools. But building a reliable, production-grade data agent requires moving beyond simple prompt engineering. This post shares architectural lessons learned from building an enterprise natural language to SQL (NL2SQL) agent.

The Problem with Pipeline Architectures

Most NL2SQL systems start with a pipeline approach:

User Question → Intent Detection → RAG Retrieval → SQL Generation → Validation → Response

This works for demos but creates problems at scale: ...
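For orientation, here is a minimal sketch of that linear pipeline shape. It is an illustrative assumption, not the post's actual implementation: every stage is a stub standing in for an LLM call, a vector-store lookup, or a real SQL validator.

```python
# Minimal sketch of a linear NL2SQL pipeline (illustrative stubs only).

def detect_intent(question: str) -> str:
    # Stand-in for an intent classifier.
    return "aggregate" if "how many" in question.lower() else "lookup"

def retrieve_context(question: str) -> str:
    # Stand-in for RAG retrieval over schema docs.
    return "table orders(id, customer_id, created_at)"

def generate_sql(question: str, intent: str, context: str) -> str:
    # Stand-in for LLM-based SQL generation.
    return "SELECT COUNT(*) FROM orders"

def validate_sql(sql: str) -> bool:
    # Stand-in for static checks or a dry run against the database.
    return sql.strip().upper().startswith("SELECT")

def answer(question: str) -> str:
    intent = detect_intent(question)
    context = retrieve_context(question)
    sql = generate_sql(question, intent, context)
    if not validate_sql(sql):
        raise ValueError("generated SQL failed validation")
    return sql  # in production: execute the query and format the response

print(answer("How many orders did we receive last week?"))
```

The rigidity is visible even in the sketch: each stage feeds the next exactly once, so there is no way for a downstream failure to trigger re-retrieval or re-generation without bolting on extra loops.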

February 3, 2026 · 6 min · Michael

The Bitter Lesson Meets Reality: Lessons from Building Production Agents

A response to the observation that "coding agents are general agents" and that program synthesis will outperform hand-crafted vertical-specific agents.

The Post

The Thesis

The tweet makes a compelling point: coding agents that write and execute code represent a form of scalable search. Rather than encoding years of expert knowledge into prompts and rules, let the agent explore the solution space through code generation and execution. This is "bitter lesson adjacent": general methods that leverage computation outperform hand-crafted approaches. ...

January 29, 2026 · 6 min · Michael

Prototype First, Requirements Second

The process we inherited

I have been rethinking how we clarify requirements in software teams, because the way most of us still work feels optimized for a world that no longer exists. Traditionally, we treated requirements as something that had to be settled before implementation began. We wrote documents, scheduled planning sessions, debated edge cases in meetings, aligned on tickets and estimates, and only after everyone felt comfortable did we finally start building. That process made sense when writing software was expensive and slow. If a feature took weeks to implement, investing time up front to reduce mistakes was rational. ...

January 27, 2026 · 4 min · Michael

How I Built It: Grandfather Clock & Notion-Driven AI Development

Building a skeuomorphic clock with AI, Notion, and no copy-pasting.

January 25, 2026 · 1 min · Michael

How I Gave My AI a Soul: Upgrading Clawdbot for System Thinking

A retrospective on upgrading Clawdbot from a reactive task-doer to a proactive, system-thinking partner.

January 25, 2026 · 1 min · Michael

If My Agent Isn’t in the Meeting, Are We Wasting Our Time?

A thought has been bothering me lately during meetings. Not about the agenda. Not about the discussion quality. Something feels different. My agent isn't here. If my agent is the one who will eventually implement, analyze, draft, simulate, and execute most of the follow-up work, why are we making decisions without it in the room? And if the agent isn't present, are we actually wasting part of the meeting?

For years, meetings assumed a very specific workflow. Humans discuss. Humans decide. Humans go back and execute. The people in the room were the ones who would do the work later, so alignment made sense. The decision-makers and the executors were the same group.

Today, a large portion of execution is delegated to agents. They write the code, analyze the logs, generate the docs, explore design options, and draft the first implementation. In many cases, they are faster and more thorough than we are. Which means something subtle has changed. The "doers" are no longer fully represented in the meeting.

So what happens? We make decisions based on partial information. We debate feasibility without quickly validating it. We speculate about tradeoffs that an agent could prototype in minutes. We leave with action items that could have been pre-computed during the conversation. In short, we talk first and test later.

But in an agentic workflow, that order is backwards. If agents compress execution to minutes or hours, then meetings should compress uncertainty, not create more of it. Instead of debating what might work, we should be asking the agent in real time. Instead of guessing complexity, we should generate a quick implementation. Instead of assigning follow-up analysis, we should produce it on the spot. The meeting becomes less about opinions and more about evidence.

This doesn't mean replacing humans. It means changing the role of humans. We should focus on framing the problem, defining constraints, and making judgment calls, while the agent handles exploration and validation live. In that world, an agent is not a tool you use after the meeting. It is a participant during the meeting. Almost like a quiet engineer sitting next to you, ready to answer "what if" instantly.

So now I catch myself thinking in meetings: if my agent isn't here, are we making decisions blind? Because increasingly, the highest leverage thing in the room isn't another opinion. It's faster feedback.

January 24, 2026 · 2 min · Michael

Quizzes, Not Diffs

As AI agents accelerate code generation, reviewing every diff no longer scales. The real bottleneck shifts from writing code to understanding its consequences. Instead of relying solely on explanations or tests, a simple technique helps: ask the agent to generate a short quiz, which forces the human to prove comprehension before approving changes. Tests verify correctness, explanations summarize intent, but quizzes verify understanding. In an agentic world, effective oversight is less about inspecting code and more about validating that we truly understand what the system will do.

January 22, 2026 · 4 min · Michael

Why Do We Still Use Sprints in Software Development?

Sprints were originally designed to manage slow, uncertain execution in software development. As AI agents compress coding and iteration cycles from weeks to hours or days, execution is no longer the primary bottleneck for many teams. Keeping two-week sprints unchanged often shifts time into waiting, over-polishing, or coordination overhead. Rather than abandoning sprints, teams need to redefine their purpose. In an agentic environment, sprints should function as a cadence for decision-making, alignment, and risk management, while execution flows continuously. The real question is not whether sprints still matter, but what problem they are meant to solve now.

January 21, 2026 · 3 min · Michael

A Practical Guide to Parallel AI Development with Multiple Agents

The idea of running many AI coding agents at the same time is powerful, but often misunderstood. When people claim they have ten or fifteen agents working in parallel, they rarely mean ten agents editing the same codebase simultaneously. What they usually mean is that they have many independent AI contexts working on different problems, and they orchestrate the results.
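As a rough sketch of that orchestration pattern, assuming each agent works in its own isolated context (its own workspace or sandbox) and an orchestrator merges the results afterwards, the loop below fans tasks out concurrently and collects the drafts; `run_agent` is a hypothetical stub, not a real agent SDK call.

```python
# Hedged sketch: many independent agent contexts run concurrently,
# results gathered for human review. run_agent is a placeholder stub.
import asyncio

async def run_agent(task: str) -> str:
    # Stands in for one agent working on its own problem in isolation,
    # never sharing edits with the other agents.
    await asyncio.sleep(0.1)
    return f"draft result for: {task}"

async def orchestrate(tasks: list[str]) -> list[str]:
    # Fan out the independent tasks, then gather the results.
    return list(await asyncio.gather(*(run_agent(t) for t in tasks)))

if __name__ == "__main__":
    tasks = ["fix the login bug", "write the data migration", "draft API docs"]
    for result in asyncio.run(orchestrate(tasks)):
        print(result)
```

The point of the sketch is the shape, not the mechanics: parallelism lives at the task level, and the human's job is the final merge and review, not babysitting ten agents in one codebase.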

January 18, 2026 · 7 min · Michael