Over the holidays I spent quite a bit of time experimenting with agentic coding using Claude Code, and since then I have been working on a project at work with Cursor agents. Both gave me good first-hand experience of what a day-to-day workflow can look like in an agentic AI era.
One small but eye-opening experiment was using Claude Code to control Spotify and my Google Home speakers. Instead of opening apps, I’d just express intent like “create a playlist and play something in the background so I can focus on coding,” and the agent handled planning, selection, and execution across systems. What struck me wasn’t how easy it was to get working—that part was somewhat expected—but how quickly the traditional notion of “using software” disappeared. I could listen to music without ever opening Spotify.
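To make "expressing intent" a bit more concrete, here is a minimal sketch of the kind of tool surface an agent plans against. Everything in it is an assumption for illustration: the tool names, handlers, and two-step plan are hypothetical stand-ins, not Spotify's or Google Home's actual APIs, and not the exact wiring Claude Code used for me.

```python
# Hypothetical sketch: a small tool surface an agent could plan against.
# Tool names and handlers are illustrative, not any real service's API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[..., str]

def create_playlist(name: str, mood: str) -> str:
    # In a real setup this would call the music service's API.
    return f"playlist '{name}' created for mood '{mood}'"

def play_on_speaker(playlist: str, room: str) -> str:
    # In a real setup this would target a speaker via its casting API.
    return f"playing '{playlist}' in the {room}"

TOOLS = [
    Tool("create_playlist", "Create a playlist for a given mood", create_playlist),
    Tool("play_on_speaker", "Start playback on a named speaker", play_on_speaker),
]

# Given "play something in the background so I can focus on coding",
# the agent's plan might reduce to two tool calls:
plan = [
    ("create_playlist", {"name": "Deep Focus", "mood": "ambient"}),
    ("play_on_speaker", {"playlist": "Deep Focus", "room": "office"}),
]

registry = {t.name: t.handler for t in TOOLS}
for tool_name, args in plan:
    print(registry[tool_name](**args))
```

The user never touches either app; the whole interaction collapses into intent on one side and a couple of well-described, callable capabilities on the other.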
As AI agents become capable of planning and taking action, users increasingly stop interacting with applications directly and instead express intent to AI systems that act on their behalf. We already see this pattern in modern AI software. Claude Code itself is a good example, where the primary interface is essentially a CLI and the “app” fades into the background. I believe a similar shift will emerge in enterprise environments as agentic AI becomes infrastructure.
The Strategic Shift
If that holds, the strategic question for software companies shifts:
- Less about “adding AI features”
- More about how software participates inside AI workflows
That raises hard questions, some of which I try to make concrete in the sketch after this list:
- How callable and predictable systems are for AI agents, not just humans
- Whether APIs and data semantics become primary product surfaces
- How guardrails, intent boundaries, and auditability work when software is invoked autonomously
- Where value accrues when orchestration sits above traditional UIs
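Here is a toy sketch of what the first three of those questions can look like in code. All names are hypothetical and invented for illustration, not any real product's API: a declared parameter schema (predictability), an intent boundary enforced below the agent rather than in the prompt, and an audit record for every autonomous call.

```python
# Hypothetical sketch of "AI-callable" guardrails: a typed schema the agent
# must satisfy, an intent boundary (allow-list plus a spend cap), and an
# audit trail for every autonomous invocation. All names are illustrative.

import json
import time

TOOL_SCHEMA = {
    "issue_refund": {
        "params": {"order_id": str, "amount": float},
        "max_amount": 100.0,          # intent boundary: the agent cannot exceed this
    },
}

AUDIT_LOG: list[dict] = []

def call_tool(agent_id: str, tool: str, args: dict) -> str:
    spec = TOOL_SCHEMA.get(tool)
    if spec is None:
        raise PermissionError(f"{tool} is not exposed to agents")

    # Predictability: reject calls that don't match the declared parameter types.
    for name, expected in spec["params"].items():
        if not isinstance(args.get(name), expected):
            raise TypeError(f"{name} must be {expected.__name__}")

    # Intent boundary: a business rule enforced below the agent, not by the prompt.
    if args["amount"] > spec["max_amount"]:
        raise PermissionError("refund exceeds autonomous limit; needs human approval")

    # Auditability: record who invoked what, with which arguments, and when.
    AUDIT_LOG.append({"agent": agent_id, "tool": tool, "args": args, "ts": time.time()})
    return f"refunded {args['amount']} on order {args['order_id']}"

print(call_tool("support-agent-7", "issue_refund", {"order_id": "A-1042", "amount": 25.0}))
print(json.dumps(AUDIT_LOG, indent=2))
```

The design choice worth noticing is that the guardrail lives in the callable surface itself, not in the prompt, which is what makes autonomous invocation bounded and auditable.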
Looking Ahead
I’m not arguing that UIs will disappear tomorrow. But I do think “AI-callable software” is becoming a first-class design constraint.
It’s a good reminder that, in this space, how software is consumed may change faster than how it’s built.