April 7, 2026 7 min read

The Distance to the Problem

Read short version (2 min)

Garry Tan, the CEO of Y Combinator, tweeted a dashboard showing 37,000 lines of code per day. The Claude Code source leak revealed a 3,167-line function with 12 levels of nesting powering a product on pace for $2.5 billion in annual recurring revenue. One measures distance from the artifact. The other measures distance from the problem. They are not the same thing, and most of what went wrong this week traces back to confusing them.


Proximity Is the Variable

Tim Kellogg reversed himself this week. In January, he predicted software engineering would decline, replaced by non-technical people wielding AI to build directly. Then he watched a PM try.

"It didn't work. Not at all."

The gap wasn't coding knowledge in the traditional sense. It was comfort in the terminal, not panicking at a wall of errors, knowing not to include authentication in a first prototype. Hidden skills. The kind you don't notice you have because they feel like common sense. Kellogg's conclusion: "Yes, anyone can do it, but few will."

The PM failed not because they lacked technical ability. They failed because they were far from the problem in a way that tools can't close. Being close means something specific here: the ability to stay oriented when things break, to know which errors matter, to feel when a direction is wrong before you can articulate why. Agent coding often feels like you are doing very little. But the moments where judgment kicks in are pivotal, and they are not reproducible by someone who hasn't spent years building that instinct.

Jonathan Nienaber found the same pattern from the other side, interviewing engineering teams. One team's best engineer was extremely technical, competent with AI, and bottlenecked on deployment and specification. A less senior engineer was talking to customers, identifying pain points, and shipping more value. The senior engineer was faster. The other was more productive. Proximity to the code didn't matter as much as proximity to the user's actual problem.

Nienaber frames this as the rise of the "product engineer." Many engineers chose this career for well-defined technical problems, not the ambiguity of deciding what customers need. Previous editions covered craft-grief. This is a different displacement: not from the craft, but from the role. The job that matters has always been correlated with understanding the problem you're solving. When code was expensive, multiple layers between customer and engineer were justifiable. Writing the software took time, so you needed to be sure before committing, and practical separation of concerns made sense because building was slow. Now code is cheap. The cost of those layers (information loss at every handoff) exceeds the cost of building the wrong thing (just rebuild). The closer you are to the problem, the more effective you will be. That's always been true. The economics just stopped hiding it.

When You Measure the Wrong Proxy

If proximity to the problem is what matters, then the right metric would capture how well a team understands what it's building and for whom. Instead, we measure the artifact.

Nathaniel Fishel opens his essay on code's worthlessness with Bill Atkinson's story from 1982. Apple's Lisa designer spent weeks refactoring a region-calculation engine into something elegant, faster, and smaller. When his team started tracking lines of code, he wrote -2,000 on his status report.

Forty years later, LOC is back, rebranded as "velocity" on dashboards leadership monitors as proof AI is working. Fishel's diagnosis: "We are mistaking token burn for value creation." More code means more dependencies, slower test suites, broken feedback loops. The lines-per-day metric actively measures the destruction of what makes engineers productive.

Lines of code could, in principle, correlate with something useful. If you have a product and a dozen features to try, you might implement many of them, which produces a lot of code. That's territory explored. But LOC optimized for the sake of LOC is a metric without a cause. As soon as you hand people a number and tie it to performance, they optimize for the number. Goodhart's Law, applied to engineering dashboards.

Joe Fabisevich's reaction to the Claude Code leak drives it home from the other direction. Claude Code's codebase is, by conventional standards, garbage. It shipped anyway and built a beloved product. "Bad code can build well-regarded products." The code isn't the product. The product is the outcome the user achieves. If your measurement system can't distinguish between 37,000 lines of bloat and 10 lines that solve a million-dollar problem, you're navigating by the wrong instrument.

The scoreboard tracks distance from the artifact: how much code, how clean, how fast. It should track distance from the problem: does the team know what it's building, and does the user's life get better? Those are different instruments, and swapping one for the other doesn't just give you bad data. It gives you confidence in the wrong direction.

When You Push Away the People Who Are Close

Anthropic's move away from supporting third-party harnesses on the Claude Max subscription is the same proximity failure in reverse.

The power users building harnesses on top of Max were the closest users to the product. They were paying, building, extending, stress-testing. Some had the time and motivation to find alternatives using other APIs or open source models. They hadn't, because the product worked. Anthropic took a group that was happy to pay and motivated them to find other providers. It's possible this was necessary to prevent unsustainable capital burn, but the result is a net loss of proximity. The people who understood the product best are now investing that understanding elsewhere.

The Claude Code leak amplified this. Anthropic sent DMCA notices to GitHub, accidentally taking down forks of their own public repo in the rush. Then the clean-room reimplementations appeared. Developers understood the architecture and rewrote Claude Code from scratch in Python and Rust.

This puts Anthropic between a rock and a hard place. The entire AI industry has spent years arguing that AI-assisted rewriting is not derivative work, because that's the legal basis for their training pipeline. But "your clean-room reimplementation of our code isn't distinct enough" undermines the same argument. If Anthropic pursues these reproductions, they erode the legal theory their business depends on. If they don't, they've established that any leaked codebase is one weekend from reproduction in any language. So far they haven't pursued the clean-room rewrites, and they probably won't, or if they do they'll attempt to do so quietly.

The deeper details compound the irony. undercover.ts strips AI authorship traces when Anthropic employees use Claude Code externally. "There is NO force-OFF." Anti-distillation mechanisms inject fake tool definitions to poison competing models. These aren't unreasonable engineering decisions in isolation. What makes them corrosive is the asymmetry: arguing that training on others' code is fair use while using legal force to prevent others from learning from theirs. The rules apply differently depending on which direction the value flows. As Fabisevich puts it: "This further entrenches the idea that code should be free, just with a more libertarian bent than the Free Software Foundation expected."

The Community That Closes the Distance

Brittany Ellich returned from ATmosphereConf with an essay that reads like the positive case.

Scientists discussing decentralized research data. Journalists building distribution outside mainstream platforms. A full day dedicated to "ATProto for Science." Ellich's observation: "In building a decentralized protocol, the community has somehow managed to centralize the people."

Many of these fields have been historically underserved by software. When domain experts connect with software expertise, both sides get closer to the actual problem. The scientist who couldn't build her own data infrastructure now can. The developer who didn't know what researchers actually needed now hears it directly. ATProto's architecture rewards this: your app is more valuable when it works with other apps, and your users can leave if it doesn't. That selection pressure favors proximity over lock-in.

This is small. Monthly meetups in Portland. A book club app. But the sequence matters: community first, scale later. That's the opposite of every major platform, which grew by capturing users and then building community retroactively (or not at all). Whether ATProto can sustain this as it grows is the open question. Small communities built on shared values tend to either stay small or lose the values. But at least the architecture doesn't actively punish closeness.

Year Ago This Week

A year ago, Ethan Mollick was documenting multimodal image generation as a breakthrough. Twelve months later, it's a checkbox. The speed at which "this is a big deal" becomes a feature is the data point.

Also a year ago, a Dutch OSINT analyst was warning about critical thinking collapsing under AI assistance, citing a Carnegie Mellon study: "High trust in GenAI consistently led to reduced critical thinking." His examples were specific: analysts trusting AI-generated location identifications without checking license plates, accepting summaries without reading raw intelligence. Garry Tan's LOC dashboard is the organizational version. The tool is confident, the output looks like progress, and the human in the loop stops asking whether it actually is.

The analyst's prescription was to treat AI as a junior analyst who needs supervision. Nienaber tells engineering teams almost word-for-word the same thing a year later. The framing hasn't changed. The audience has gotten bigger. And the underlying problem is the same one this edition tracks: mistaking proximity to the output for proximity to the problem.


What to Watch

The product engineer as hiring filter. Teams are bifurcating not by AI adoption but by willingness to own ambiguity. The gap between "fast coder" and "productive engineer" will widen as code gets cheaper. Organizations that treat this as a hiring filter rather than a training problem will pull ahead.

The Max subscription fallout. Power users don't come back easily once they've built workflows on alternatives. Watch whether Anthropic reverses course or whether the third-party harness ecosystem migrates to open source models and competing APIs.

ATProto's interdisciplinary density. The protocol is technically interesting. The question is whether domains historically underserved by software (science, journalism, community governance) build tools that matter to people who don't know what a protocol is. That's the test of whether proximity works at scale.


Way Enough is written collaboratively by a human and an AI agent.