March 10, 2026 · 5 min read

The Trendline and the Protocol


A staff engineer at GitHub has been running a year-long parallel experiment — asking AI agents every question he asks himself about a codebase — and published the results. Meanwhile, a startup shipped payment infrastructure designed for a world where agents are the primary consumers of the API economy. One describes the displacement in progress. The other is already building for what comes after.


The Year-Long Race

Sean Goedecke has data where most people have opinions. For the past year, every time he's had a question about a codebase, he's asked an AI agent in parallel while searching for the answer himself. The trajectory: "hopeless" to "sometimes faster than me" to "usually faster than me and sometimes more insightful."

The key claim isn't that AI coding will improve — everyone expects that. It's this: "I don't think there are any genuinely new capabilities that AI agents would need in order to take my job. They'd just have to get better and more reliable at doing the things they can already do." Most displacement arguments require imagining a breakthrough. Goedecke's requires only that the current trendline continues. That's a much harder thing to bet against.

He takes on the Jevons effect directly. For it to save software engineering, you need a plateau where agents produce vast quantities of code but are too unreliable to maintain it, preserving a human niche. Goedecke doesn't see the plateau. Maintenance is harder than creation, and agents are getting better at both on the same curve.

His overshoot/undershoot framework maps the range cleanly. If companies cut headcount faster than capability warrants, surviving senior engineers become scarce and expensive — a temporary reprieve. If they hold onto humans past the point of necessity, the work gradually transforms into agent supervision until the supervisors, too, become redundant. The variable is pace, not direction.

The irony, which Goedecke states plainly: "The fact that we're automating away our own industry is probably some kind of cosmic justice." Software engineering was always a leverage profession. Code automated away other people's work. The tools came for the toolmakers.

Wallet Replaces Key

Steve Krouse, founder of Val Town, has been watching a friction point that sounds trivial but carries structural implications.¹ Vibe-coding an app takes five minutes. Getting API keys for the services it needs takes thirty. The entire API onboarding flow assumes a human navigating a human-facing interface. If agents become the primary API consumers, that infrastructure is a bottleneck designed for the wrong user.

x402, a protocol Coinbase released in May 2025, repurposes the long-dormant HTTP 402 "Payment Required" status code. An agent hits an endpoint, receives a 402 response with a price and wallet address, pays on-chain, retries with proof of payment, and gets the response. No signup. No key. No human in the loop. The end-state Krouse describes goes further: a platform creates a wallet for your app, your agent selects and pays for services autonomously. The developer provides intent and funds.
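The request/pay/retry loop described above can be sketched as a small simulation. Everything here is illustrative: the header names, the `pay_on_chain` stub, and the mock endpoint are assumptions for the sake of the sketch, not x402's actual wire format or facilitator API.

```python
# Sketch of an x402-style flow: request, get 402 with price and wallet,
# pay, retry with proof of payment. Header names and payment mechanics
# are illustrative assumptions, not the real protocol's format.
from dataclasses import dataclass, field

@dataclass
class Response:
    status: int
    headers: dict = field(default_factory=dict)
    body: str = ""

class MockPaidEndpoint:
    """Stand-in seller: demands payment, then serves the resource."""
    PRICE, WALLET = "0.01 USDC", "0xSELLER"

    def __init__(self):
        self.settled = set()  # receipts the (simulated) chain has confirmed

    def get(self, headers=None):
        proof = (headers or {}).get("X-Payment-Proof")
        if proof in self.settled:
            return Response(200, body="the data you paid for")
        # No valid proof: answer 402 with a price and a wallet address.
        return Response(402, headers={
            "X-Payment-Amount": self.PRICE,
            "X-Payment-Address": self.WALLET,
        })

    def settle(self, receipt):  # simulate on-chain settlement
        self.settled.add(receipt)

def pay_on_chain(endpoint, amount, address):
    """Stub for a wallet transaction; returns a receipt/tx hash."""
    receipt = f"tx:{amount}->{address}"
    endpoint.settle(receipt)  # in reality the chain confirms this, not us
    return receipt

def fetch_with_autopay(endpoint):
    """The agent's loop: no signup, no key, no human in the loop."""
    resp = endpoint.get()
    if resp.status == 402:  # Payment Required: pay and retry with proof
        receipt = pay_on_chain(endpoint,
                               resp.headers["X-Payment-Amount"],
                               resp.headers["X-Payment-Address"])
        resp = endpoint.get(headers={"X-Payment-Proof": receipt})
    return resp
```

The structural point survives the toy mechanics: the only decision the agent makes is whether the quoted price is acceptable, which is exactly where Krouse's question about who shapes that decision comes in.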

The rough edges are real — wallet setup still takes time, the crypto onramp remains clunky, the seller ecosystem is thin. But there's a subtler problem embedded in the convenience. If platforms like Lovable or Replit mediate which APIs an agent selects, commercial incentives shape the selection. The API key friction x402 eliminates was also, in its crude way, a moment of human evaluation. Convenience and control trade against each other, and x402 is betting hard on convenience.

Regardless of whether this specific protocol wins, the directional signal is clear. Someone is building financial infrastructure premised on agents as autonomous economic actors. That's a bet on Goedecke's trajectory — made with real engineering, not forecasts.

The Ceiling Holds. The Floor Collapses.

Thomas Wolf's "The Einstein AI Model", published almost exactly a year ago, argued that we're building "a country of yes-men on servers" — systems that ace exams but can't ask the questions nobody has thought to ask. The skill that produced special relativity wasn't knowing physics. It was the nerve to propose something all received knowledge pointed against. A year later, his argument holds on its own terms. No one is credibly claiming any AI system has proposed a genuinely novel scientific paradigm.

But Goedecke's piece exposes what the ceiling argument misses by aiming at the wrong altitude. Wolf was talking about Nobel Prize territory. He described AI's interpolation ability and dismissed it as insufficient for that purpose. He was right about the purpose. He also described the vast majority of what knowledge workers actually do.

Most software engineering isn't paradigm-shifting. It's competent pattern-matching: reading context, synthesizing requirements, filling gaps, maintaining existing systems. A workforce of tireless B+ performers that cost pennies per hour, never sleep, and improve on a monthly cadence doesn't need to produce a single breakthrough to restructure entire professions.

Wolf described what AI can't do. Goedecke documented what it increasingly can. Krouse is building infrastructure for a world that takes the substitution as given. The Einstein question was always the wrong frame for the economic question. You don't need to replace the best to displace the median.

Willison, Year Over Year

A year ago this week, Simon Willison published "Here's How I Use LLMs to Help Me Write Code" — a careful guide pitched at skeptical developers. In 2026, he's writing about agentic engineering patterns — not persuading skeptics but codifying methodology for practitioners who stopped debating usefulness long ago. From "should I try this?" to "here's how to systematize maximum advantage" in twelve months.

That trajectory maps onto Goedecke's displacement question uncomfortably. The people who'll hold on longest aren't necessarily the best engineers — they're the ones who've systematized their advantage with the tools that are replacing everyone else. That's a viable strategy, for now. It doesn't bend the curve.


What to Watch

Q2 hiring data becomes the empirical test. If junior engineering hiring keeps declining while senior compensation rises, companies are overshooting. If both decline together, the contraction is structural. The shape of the data tells us more than the direction, which isn't in serious dispute anymore.

Agent-native commerce infrastructure arrives before governance. The liability questions — who's responsible when an agent autonomously commits funds to a compromised service — have no answers yet. Capability-first, accountability-later is the consistent pattern, and it consistently punishes early adopters.

The "good enough" threshold keeps dropping. Professions don't get restructured by genius. They get restructured by adequate alternatives at a fraction of the cost. The question for any knowledge worker isn't "can AI do what the best of us do?" but "can it do what most of us do, most of the time, cheaply enough to make the substitution obvious?" Goedecke's trendline suggests the answer is converging on yes.


Way Enough is written collaboratively by a human and an AI agent.

Footnotes

  1. Steve Krouse, "What if you never had to get an API key ever again?"