AI coding tools are getting better fast. If you don't work in code, it can be hard to notice how much things are changing, but GPT-5 and Gemini 2.5 have made a whole new set of developer tricks possible to automate, and last week Sonnet 4.5 did it again.
At the same time, other skills are progressing more slowly. If you are using AI to write emails, you're probably getting the same value out of it you did a year ago. Even when the model gets better, the product doesn't always benefit, particularly when the product is a chatbot that's doing a dozen different jobs at the same time. AI is still making progress, but it's not as evenly distributed as it used to be.
The difference in progress is simpler than it seems. Coding apps are benefiting from billions of easily measurable tests, which can train them to produce workable code. This is reinforcement learning (RL), arguably the biggest driver of AI progress over the past six months, and it's getting more intricate all the time. You can do reinforcement learning with human graders, but it works best if there's a clear pass-fail metric, so you can repeat it billions of times without having to stop for human input.
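To make that concrete, here is a toy sketch of what a pass-fail grader can look like. Everything in it (the function names, the single assert-style test) is invented for illustration; real pipelines run far larger test batteries inside sandboxes:

```python
import subprocess
import sys
import tempfile

# Toy pass/fail grader: run a model-written snippet against a fixed test
# and collapse the outcome into a scalar reward. All names are illustrative.
def grade_candidate(candidate_code: str, test_code: str, timeout: float = 5.0) -> float:
    """Return 1.0 if the candidate passes the test script, else 0.0."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n" + test_code)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path], capture_output=True, timeout=timeout)
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0  # code that hangs counts as a failure too

# A model-written function and the test that grades it:
candidate = "def add(a, b):\n    return a + b"
test = "assert add(2, 3) == 5"
print(grade_candidate(candidate, test))  # -> 1.0
```

Because the whole loop is mechanical, it can run millions of times with no human in it, which is exactly the property reinforcement learning needs.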
As the industry relies increasingly on reinforcement learning to improve products, we're seeing a real difference between capabilities that can be automatically graded and the ones that can't. RL-friendly skills like bug-fixing and competitive math are getting better fast, while skills like writing make only incremental progress.
In short, there's a reinforcement gap, and it's becoming one of the most important factors for what AI systems can and can't do.
In some ways, software development is the perfect subject for reinforcement learning. Even before AI, there was a whole sub-discipline devoted to testing how software would hold up under pressure, largely because developers needed to make sure their code wouldn't break before they deployed it. So even the most elegant code still needs to pass through unit testing, integration testing, security testing, and so on. Human developers use these tests routinely to validate their code and, as Google's senior director for dev tools recently told me, they're just as useful for validating AI-generated code. Even more than that, they're useful for reinforcement learning, since they're already systematized and repeatable at a massive scale.
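In practice, that reuse can be as blunt as pointing an RL loop at a project's existing test suite. Here's a minimal sketch, assuming a repo with a pytest suite; the command and the setup are my assumptions, not any lab's actual pipeline:

```python
import subprocess

def repo_reward(repo_dir: str, timeout: float = 300.0) -> float:
    """Run a project's existing test suite; pass/fail becomes the reward."""
    try:
        result = subprocess.run(
            ["pytest", "-q"],          # assumes the repo already uses pytest
            cwd=repo_dir,
            capture_output=True,
            timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return 0.0
    # pytest exits with code 0 only when every collected test passes.
    return 1.0 if result.returncode == 0 else 0.0
```

The tests were written to catch human mistakes, but nothing about them cares who wrote the code, which is why they transfer so cleanly to grading AI output.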
There's no easy way to validate a well-written email or a good chatbot response; these skills are inherently subjective and harder to measure at scale. But not every task falls neatly into "easy to test" or "hard to test" categories. We don't have an out-of-the-box testing kit for quarterly financial reports or actuarial science, but a well-capitalized accounting startup could probably build one from scratch. Some testing kits will work better than others, of course, and some companies will be smarter about how to approach the problem. But the testability of the underlying process is going to be the deciding factor in whether it can be made into a functional product instead of just an exciting demo.
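What would such a kit even check? Probably hard invariants before anything subjective. Here's a hypothetical fragment for a generated quarterly report; the field names and checks are all made up for illustration:

```python
# Hypothetical "testing kit" fragment for a generated quarterly report:
# not style or insight, just machine-checkable invariants.
def report_checks(report: dict) -> float:
    """Score a report by the fraction of hard invariants it satisfies."""
    checks = [
        # the income statement should reconcile
        report["revenue"] - report["expenses"] == report["net_income"],
        # the stated margin must be internally consistent
        abs(report["net_income"] / report["revenue"] - report["margin"]) < 1e-6,
        # quarters should sum to the stated annual figure
        sum(report["quarterly_revenue"]) == report["revenue"],
    ]
    return sum(checks) / len(checks)

sample = {
    "revenue": 1000, "expenses": 800, "net_income": 200,
    "margin": 0.2, "quarterly_revenue": [250, 250, 250, 250],
}
print(report_checks(sample))  # -> 1.0
```

Partial credit is fine; what matters is that the score comes from a machine rather than a human reviewer, so it can run at RL scale.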
Some processes turn out to be more testable than you might think. If you'd asked me last week, I would have put AI-generated video in the "hard to test" category, but the immense progress made by OpenAI's new Sora 2 model shows it may not be as hard as it looks. In Sora 2, objects no longer appear and disappear out of nowhere. Faces hold their shape, looking like a specific person rather than just a collection of features. Sora 2 footage respects the laws of physics in both obvious and subtle ways. I suspect that, if you peeked behind the curtain, you'd find a robust reinforcement learning system for each of these qualities. Put together, they make the difference between photorealism and an entertaining hallucination.
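To be explicit about how speculative that is: if each of those qualities did have its own automatic grader, a composite reward could be as simple as a weighted blend. Every function below is a placeholder I've invented; none of it reflects OpenAI's actual system:

```python
import random

# Speculation made concrete: one stub grader per visual quality, blended
# into a single reward. The stubs return random numbers as stand-ins.
def object_permanence_score(video) -> float:
    return random.random()  # placeholder: would track objects across frames

def face_consistency_score(video) -> float:
    return random.random()  # placeholder: would compare face embeddings over time

def physics_score(video) -> float:
    return random.random()  # placeholder: would flag physically impossible motion

def video_reward(video, weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted blend of per-quality scores, each assumed to lie in [0, 1]."""
    scores = (
        object_permanence_score(video),
        face_consistency_score(video),
        physics_score(video),
    )
    return sum(w * s for w, s in zip(weights, scores))

print(video_reward(video=None))  # stubs ignore the input; a real system would not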
To be clear, this isn't a hard-and-fast rule of artificial intelligence. It's a result of the central role reinforcement learning is playing in AI development, which could easily change as models develop. But as long as RL is the primary tool for bringing AI products to market, the reinforcement gap will only grow bigger, with serious implications for both startups and the economy at large. If a process ends up on the right side of the reinforcement gap, startups will probably succeed in automating it, and anyone doing that work now may end up looking for a new career. The question of which healthcare services are RL-trainable, for instance, has enormous implications for the shape of the economy over the next 20 years. And if surprises like Sora 2 are any indication, we may not have to wait long for an answer.