Humans in the Loop
A recent article by Dwarkesh nicely clarified something most people miss about LLMs: "The reason humans are so useful is not mainly their raw intelligence. It's their ability to build up context, interrogate their own failures, and pick up small improvements and efficiencies as they practice a task."
This insight reveals two critical truths about the current state of AI:
First, humans remain the value creators. Despite GenAI's impressive capabilities, the economic returns flow to skilled operators, not the models themselves. The technology is powerful, but worthless without someone who knows how to wield it effectively.
Second, the adaptation race is just beginning. GenAI evolves so rapidly that continuous learning isn't optional; it's survival. And we're approaching an inflection point where AI systems will learn continuously themselves, fundamentally reshaping competition overnight.
Dwarkesh illustrates this paradox well: "Part of the reason some people are too pessimistic is that they haven't played around with the smartest models operating in the domains that they're most competent in."
Watch an expert describe an application and see AI deliver a working prototype in minutes—it's genuinely miraculous. But fumble your prompts, and you'll get garbage just as quickly.
But the real point Dwarkesh is making here is that the AI we have right now is, compared to its near-term potential, abjectly terrible. Moreover, the value we DO get from it comes predominantly from the operators, which (again) puts even more emphasis on having talent that can a) use GenAI's current strengths effectively, and b) continuously adapt to and employ new GenAI capabilities.
Tactically, it means your existing strategy must continue to emphasize flexible, evolving, intrinsically motivated talent who have the access, time, and support to continuously mess around with the latest models. That's expensive. But the alternative, being late to the table as AI capabilities evolve, is a level of risk most companies cannot afford to ignore.
From Josh Klein's AI Newsletter