Around March of this year, my wife and I were driving somewhere. I turned to her and said, "What should we do if in two to five years all of my skills are economically worthless?"

Looking back on 2025, I'm realizing this may turn out to have been a transformational year. Of course, it's also likely that I will look back on this post in ten years and think how naive and foolish I was. So far though, the signs look to me to be on the side of radical transformation.

For me there were two things that happened around March that shook my worldview and led to that conversation with my wife.

First, I tried GitHub Copilot's agent mode.

My experience with vibe coding to that point had been mostly "this is pretty neat, but it is fundamentally limited." But I also noticed that if you ask an LLM to write some code, compile it, and paste the error messages into the chat, the LLM could often fix them. It makes sense. Most of my code doesn't work the first time either, but by iterating with the tools at my disposal I can usually get something working. Agent mode automated this copy-paste loop and made it feel like a bot that could complete well-defined units of work on its own. That was the moment that made me think there might really be something here.

Agent mode demonstrates something else that's powerful: it creates an environment where self-play is possible. The last few years' progress in AI has mostly come from training on larger and larger data sets. By now, we have trained on essentially the entire intellectual output of the human race. There just doesn't seem to be enough data to keep making gains in the same way. On the other hand, AlphaGo and friends have shown that self-play can work wildly well, so why wouldn't we expect the same if we could apply self-play to LLMs? Coding is basically a game we play against the compiler and the laws of mathematics. It's easy to tell if we've won, because our code compiles, the tests pass, etc. This means AI can create its own training data when working on coding problems, by keeping the traces that led to successfully completing the task.1
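The "compiler as referee" idea above can be sketched in a few lines. This is a toy illustration only, under my own assumptions: the function name, the subprocess-based harness, and the example task are all invented for the sketch, not any lab's actual training pipeline.

```python
import subprocess
import sys
import tempfile


def verifiable_reward(candidate_code: str, test_code: str) -> float:
    """Reward 1.0 if the candidate passes its tests, else 0.0.

    Success is checked mechanically (did the interpreter exit cleanly?),
    which is what makes the reward 'verifiable': passing traces can be
    kept as training data without a human in the loop.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n" + test_code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path], capture_output=True, timeout=30
    )
    return 1.0 if result.returncode == 0 else 0.0


# Two hypothetical model outputs for the same task: "write add(a, b)".
good = "def add(a, b):\n    return a + b"
bad = "def add(a, b):\n    return a - b"
tests = "assert add(2, 3) == 5"

print(verifiable_reward(good, tests))  # 1.0 -- keep this trace
print(verifiable_reward(bad, tests))   # 0.0 -- discard or penalize
```

The key property is that the reward requires no judgment call: the game against the compiler has an unambiguous win condition, which is exactly what made self-play tractable for Go.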

Second, I read AI 2027.

I know the main authors are more on the alarmist end of the spectrum, but I think AI 2027 made such a big impact on me because I read it on the heels of being astounded by agent mode. I'll admit, the idea that we'll have superintelligent nanobots in just two years sounds absurd. On the other hand, the individual steps they laid out from here to there do not strike me as beyond the realm of plausibility. One of the things that was really powerful about the AI 2027 scenario was that the timelines are steep enough that months matter. Things happen on a timescale that we can see play out in essentially real time. Even if 2027 is too early, 2030 or 2035 or 2040 would still impact my life in profound ways.

With that in mind, AI 2027 gives me a baseline to evaluate subsequent developments against. It gives me a way to ask, "Are we ahead of schedule or behind?"

We're only about six months out, so maybe we shouldn't expect much progress along that scenario. On the other hand, that's about 20% of the whole 2027 timeline, so it seems like a reasonable time to start evaluating some of their predictions. On the quantitative side, my sense is we are behind their compute projections, but perhaps making better than expected algorithmic progress. The qualitative experience seems pretty spot on. We have agents that are able to do surprising things but fail in hilarious and sometimes adorable ways. We're also starting to see AI become a significant political issue. All in all, I'd say the overall 2027 scenario does not yet seem fundamentally broken.

This past year I've found myself reflecting often on a conversation I had around 2005. I was in college working on a computer science degree, talking with a friend who worked in an auto parts factory. I said something about how the Internet had completely transformed our way of life. My friend responded, "Maybe for you, but it hasn't made much difference to normal people's lives!" At the time it was a good reminder that I lived in a techno-bubble and that my experiences might not be typical. But fast forward 20 years and the Internet is everywhere. We do so much shopping online that it's getting harder to even find brick-and-mortar stores.2 We organize social groups through things like Facebook and WhatsApp. We RSVP for weddings online and plan baby showers online. The Internet has been blamed for the downfall of democracy, crippling societal anxiety, and any number of nation- and world-wide issues. I think it's fair to say the Internet has transformed the lives even of non-computer nerds.

I have a feeling now is a similar time with AI. Those of us in the tech industry are positioned to see the strongest early impacts, while for most people AI has not lived up to the hype.3 But I doubt that coding will be AI's last success.

So back to that car ride. My wife asked if I really thought this would happen, that in a few years computers would be able to do all the work I do now. "Probably not," I said, "but it seems like enough of a possibility that it's worth having a plan."


1. I've since learned this is called Reinforcement Learning with Verifiable Rewards (RLVR), and it seems it has provided a lot of the coding progress for recent models.

2. That's my experience, anyway, but I also live in Silicon Valley. Maybe in the rest of the world you can still drive to stores and shop in person.

3. To be fair, it hasn't lived up to the hype for coding agents either, but I think that has more to do with how much hype there is. The coding capabilities of current models genuinely impress me, despite the shortcomings they still have.