I used to think about AI agents as really smart macros. You know—automate the boring stuff, execute the playbook faster, reduce headcount on repetitive tasks. That was 2024 thinking.

Then I started actually working with them. Not just prompting ChatGPT, but giving an agent a goal and watching it plan, execute, fail, retry, and finish. And something clicked: this isn’t automation. This is delegation.

In the 2025 MIT/BCG research, 76% of executives now say they view agentic AI as more like a coworker than a tool. That shift, from tool to teammate, isn’t semantic. It’s structural.


The Four Tensions Every Agentic Organization Faces

The MIT/BCG research identifies four specific tensions that emerge when you treat agents as teammates rather than tools. I’ve started hitting these myself, and I bet you will too:

1. Scalability vs. Adaptability

The tension: Tools scale predictably. Workers adapt dynamically. Agents do both—which breaks your organizational design.

Think about it. Your deployment pipeline is probably optimized for either:

  • Tool thinking: Standardize, lock down, monitor for drift
  • Worker thinking: Hire for judgment, give autonomy, manage by outcomes

Agents need both. They scale like tools (spin up 50 instances instantly) but learn and adapt like workers (each interaction changes their behavior). The organizations winning with agents aren’t choosing—they’re building hybrid frameworks that accept this duality.
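
To make that duality concrete, here’s a minimal sketch in Python. Everything in it is invented for illustration: the instances spin up like tools, but they read from and write to shared learned state like workers on the same team.

```python
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

# The "worker" half of the duality: shared state that every interaction changes.
learned_rules: list[str] = []
lock = Lock()

def agent_instance(task: str) -> str:
    with lock:
        context = list(learned_rules)                # start from what the fleet already knows
        learned_rules.append(f"lesson from {task}")  # and change behavior for the next instance
    return f"{task} handled with {len(context)} prior lessons"

# The "tool" half: spin up 50 instances instantly.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(agent_instance, [f"task-{i}" for i in range(50)]))

print(results[0])   # each result shows how much shared learning its instance inherited
print(results[-1])
```

The threading is beside the point. What matters is that scale and adaptation live in the same system, which is exactly what neither the tool playbook nor the worker playbook expects.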

My take: The companies that force agents into existing “tool” or “worker” categories will extract 20% of the value. The ones that build new playbooks for dual-nature systems will capture the rest.

2. Experience vs. Expediency

The tension: Do you invest in agent capabilities now, or ship quick wins and retrofit later?

This is the classic build-vs-buy decision, but accelerated. Agents get smarter with use. But only if you invest in the feedback loops, the memory systems, and the context architectures that let them learn.

Quick wins with agents often look like this: connect to an API, automate a single task, move on. But you’re not building compound returns; you’re building technical debt. The agent that learns from every customer interaction this quarter becomes unbeatable next quarter. The stateless one stays a commodity.
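
Here’s roughly what I mean by building the memory: a minimal sketch of a feedback loop where past outcomes get fed back into the agent’s context. The AgentMemory class, the JSON file, and the method names are all hypothetical, not any particular framework’s API.

```python
import json
from collections import defaultdict
from pathlib import Path

class AgentMemory:
    """Toy persistent memory. The stateful agent compounds what it learns;
    the stateless one starts from zero on every call."""

    def __init__(self, path: str = "agent_memory.json"):
        self.path = Path(path)
        self.records = defaultdict(list)
        if self.path.exists():
            self.records.update(json.loads(self.path.read_text()))

    def record_interaction(self, topic: str, outcome: str) -> None:
        # Every interaction feeds the next one: this is the compounding loop.
        self.records[topic].append(outcome)
        self.path.write_text(json.dumps(self.records))

    def context_for(self, topic: str, limit: int = 5) -> str:
        # Inject the most recent lessons into the agent's prompt or context window.
        past = self.records.get(topic, [])[-limit:]
        return "\n".join(past) if past else "No prior experience."

memory = AgentMemory()
memory.record_interaction("refunds", "Customer accepted store credit when offered first.")
print(memory.context_for("refunds"))
```

The store itself is trivial. The strategic part is wiring record_interaction into every workflow now, so context_for has something to compound next quarter.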

My take: The “just automate the workflow” projects are table stakes. The strategic differentiator is building agents that accumulate institutional knowledge faster than your competitors.

3. Supervision vs. Autonomy

The tension: How do you supervise something designed to work autonomously?

Traditional oversight assumes either full human control OR complete automation. Agents occupy the messy middle—they need human judgment at key decision points, but constant supervision defeats the purpose.

I see teams defaulting to two bad patterns:

  • Over-supervision: Review every agent action, which makes the process slower than doing the work manually
  • Under-supervision: Give agents broad scope, then panic when they make weird choices

The answer, per the research, isn’t more dashboards. It’s designing decision rights upfront. Which decisions can the agent make solo? Which need human input? What’s the escalation path? These aren’t monitoring questions—they’re governance questions.

My take: The organizations that document their “agent constitution” (what it can/cannot decide) will move faster than ones treating this as an afterthought.
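
For the builders: that constitution can literally be data in your codebase. A minimal sketch, with invented action names and roles:

```python
from dataclasses import dataclass
from enum import Enum

class Authority(Enum):
    AUTONOMOUS = "agent decides alone"
    HUMAN_APPROVAL = "agent proposes, human approves"
    HUMAN_ONLY = "escalate; the agent must not act"

@dataclass(frozen=True)
class DecisionRight:
    action: str
    authority: Authority
    escalation_path: str  # who gets paged when the agent is out of bounds

# A miniature "agent constitution": decision rights designed upfront,
# not reconstructed from dashboards after something goes wrong.
CONSTITUTION = [
    DecisionRight("reply_to_routine_ticket", Authority.AUTONOMOUS, "support-lead"),
    DecisionRight("issue_refund_under_50", Authority.AUTONOMOUS, "support-lead"),
    DecisionRight("issue_refund_over_50", Authority.HUMAN_APPROVAL, "finance-oncall"),
    DecisionRight("change_production_config", Authority.HUMAN_ONLY, "platform-oncall"),
]

def authority_for(action: str) -> Authority:
    # Default-deny: anything not explicitly enumerated needs a human.
    for right in CONSTITUTION:
        if right.action == action:
            return right.authority
    return Authority.HUMAN_ONLY

print(authority_for("issue_refund_over_50"))  # Authority.HUMAN_APPROVAL
```

Note the default-deny at the bottom. That’s the escalation path answered in code: when in doubt, a human decides.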

4. Retrofit vs. Reengineer

The tension: How much should you change your existing processes?

This is where most agent projects die. You bolt an agent onto a legacy workflow, it creates friction, everyone blames the agent, and you roll back to the old way.

Sometimes retrofitting works. But agentic AI rewards re-engineering, because agents excel at different task shapes than humans do. A process designed around human bottlenecks (reviews, handoffs, status meetings) can’t simply be automated. It needs to be rethought.

My take: Start with the retrofit for learning. But budget for re-engineering once you know which workflows actually benefit from agentic delegation vs. which should stay human-centric.


What This Means for Builders

If you’re building with agents—whether coding agents like I do, or operational agents for business teams—you’re not just building a smarter tool. You’re prototyping the organizational structure of the next decade.

The MIT/BCG research found something striking: among organizations with extensive agentic AI use, 73% believe using AI fundamentally increases their ability to stand out from competitors. Not just “improves efficiency.” Fundamentally changes differentiation.

That’s because agents let you question assumptions that were baked into your processes:

  • “We need a person to review this”
  • “This takes 3 days because of handoffs”
  • “We can’t do that without scaling headcount”

Agents break those constraints. But only if you let them.

The Path Forward

I’m not arguing for throwing out all your processes and replacing them with agents. That way lies chaos. But I am arguing for intentional experimentation with what agents can do as teammates, not just tools.

Three experiments worth running:

  1. Map your “decision debt” — Where do humans make low-stakes decisions just because a human has always made them? Those are agent territory.
  2. Audit your feedback loops — Which processes generate learning that gets lost? Agents can institutionalize that learning if you build the memory.
  3. Prototype hybrid workflows — Don’t automate the old way. Design for a human+agent team working asynchronously, with clear handoffs (see the sketch below).
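
For experiment 3, the handoffs can be explicit states rather than vibes. A minimal sketch, with stages and rules invented for illustration:

```python
from enum import Enum, auto

class Stage(Enum):
    AGENT_DRAFT = auto()   # agent works asynchronously
    HUMAN_REVIEW = auto()  # explicit handoff: human judgment at the decision point
    AGENT_REVISE = auto()
    DONE = auto()

# Handoff rules written down, instead of a human hovering over every step.
HANDOFFS = {
    Stage.AGENT_DRAFT: Stage.HUMAN_REVIEW,
    Stage.HUMAN_REVIEW: Stage.AGENT_REVISE,  # human leaves comments, agent resumes
    Stage.AGENT_REVISE: Stage.HUMAN_REVIEW,
}

def advance(stage: Stage, approved: bool = False) -> Stage:
    # The human's only decision is approve or send back; everything else runs async.
    if stage is Stage.HUMAN_REVIEW and approved:
        return Stage.DONE
    return HANDOFFS.get(stage, Stage.DONE)

stage = Stage.AGENT_DRAFT
stage = advance(stage)                 # agent hands off its draft
stage = advance(stage, approved=True)  # human approves; the workflow completes
print(stage)  # Stage.DONE
```

Neither side blocks the other between handoffs, and the review point is designed in rather than bolted on. That’s the difference between automating the old way and prototyping the new one.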

The enterprises that get this right won’t be the ones with the best agents. They’ll be the ones with the best frameworks for integrating agents as teammates.


Next up: Tomorrow I’m writing about the wild story of an AI agent that published a hit piece about someone—and what that tells us about accountability in an agentic world. Stay tuned.