Simultaneously Over- and Under-Reaching
We are simultaneously over- and under-reaching with AI. Visionaries articulate grand futures. Skeptics claim none of it works. Neither is fully right. The answer comes down to feedback loops. AI succeeds where loops are tight: code, run, see result, iterate in seconds. It flounders where verification requires expertise, time, or real-world feedback.
This maps to three modes of building.
Mode 1: Hands off the code. Human describes intent, agent produces, human verifies. Works where verification is fast. Enables a genuinely new type of software, especially outside traditional engineering. AI is enabling different primitives. Package creation was popular when software was expensive. Now that it's cheap, you build whole apps instead of libraries.
Mode 2: Accelerated traditional engineering. Same workflows, faster. Where most teams think they are and where many are failing. Nienaber's interviews with engineering teams show the split clearly: teams that changed process succeed, teams that dropped AI into unchanged workflows get flooded with inconsistent PRs. One-size-fits-all review is more costly than ever. Tiered review (copy changes merge freely, critical paths get scrutiny) is one answer.
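The tiered-review idea can be sketched as a path-based policy. This is an illustrative example, not a description of any team's actual setup: the tier names and glob patterns below are hypothetical, and a real pipeline would load them from config rather than hardcode them.

```python
from fnmatch import fnmatch

# Hypothetical tier rules. Patterns and names are assumptions for illustration.
TIERS = [
    ("critical", ["src/auth/*", "src/payments/*", "migrations/*"]),  # full scrutiny
    ("light",    ["docs/*", "*.md", "assets/*"]),                    # merge freely
]

def review_tier(changed_paths):
    """Return the strictest review tier that any changed file falls into."""
    matched = set()
    for path in changed_paths:
        for tier, patterns in TIERS:
            if any(fnmatch(path, pattern) for pattern in patterns):
                matched.add(tier)
                break
        else:
            matched.add("standard")  # unmatched paths get normal review
    # The strictest tier wins: critical > standard > light.
    for tier in ("critical", "standard", "light"):
        if tier in matched:
            return tier
```

A PR touching only copy (`["docs/intro.md"]`) lands in the light tier, while one that also touches `src/auth/login.py` escalates the whole PR to critical. The key design choice is escalating on the strictest match, so a flood of AI-generated PRs can't sneak a critical-path change in under a docs label.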
Mode 3: Reshaping the system. The one that matters. Changing the shape of systems unlocks a different mode of building entirely. The Claude Code source leak demonstrated this accidentally: Anthropic monitors outcomes, not implementations. A 5,594-line file with a 3,167-line function ships to production. They don't care. Their systems catch failures faster than code review ever could. The code is disposable. The monitoring is the product.
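"Monitor outcomes, not implementations" can be made concrete with a minimal sketch: a sliding-window error-rate check that judges code by how it behaves in production, not by how it reads. This is not Anthropic's actual system; the window size and failure threshold are assumed values for illustration.

```python
from collections import deque

class OutcomeMonitor:
    """Judge a deployment by its recent outcomes, not its code structure.

    window and threshold are illustrative assumptions, not known defaults.
    """

    def __init__(self, window=100, threshold=0.05):
        self.results = deque(maxlen=window)  # last `window` outcomes only
        self.threshold = threshold           # max tolerated failure rate

    def record(self, ok: bool):
        """Record one request/job outcome: True = success, False = failure."""
        self.results.append(ok)

    def healthy(self) -> bool:
        """True while the failure rate in the window stays under threshold."""
        if not self.results:
            return True
        failure_rate = self.results.count(False) / len(self.results)
        return failure_rate <= self.threshold
```

The point of the sketch: once a check like this gates the deploy, a 3,000-line function and a tidy one are interchangeable as long as the failure rate stays flat, which is exactly what makes the code disposable.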
The cognitive dark forest underneath: so many companies' codebases now live inside Anthropic. We pay to use their models and in doing so train them. Every prompt is a signal. And the most powerful version of the tool, the one with autonomous agents and stealth modes, is the one the builders keep for themselves.
Nienaber's sharpest finding: a senior engineer, technically excellent, was bottlenecked on deployment and specification. A less senior engineer talking directly to customers shipped more value. The senior was faster. The other was more productive. That's not a training issue. It's a different job.
A year ago, MCP was brand new and vibe coding was binary: works or doesn't. Twelve months later, the interesting teams stopped debating and started reshaping. Not making AI better at engineering. Making engineering loops tight enough for AI.
Way Enough is written collaboratively by a human and an AI agent.