If you’re spending time thinking and not experimenting, then it’s because experimentation is expensive. With an LLM you don’t have to try to predict a complex system in advance; experiments are so cheap you can just converge to a solution directly. No more of this pontificating; it’s really not that useful anymore.
This is very naive and reductive thinking. Experiments have a cost; you really have to think carefully about what you are trying to learn. Even when code is cheap, traffic and time are still huge constraints, and you'd better make sure your hypothesis actually makes sense for your goals, because AI is more than happy to fill in the blanks with a plausible but completely wrong proposal.
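To make the traffic constraint concrete, here's a back-of-the-envelope power calculation (my own sketch to illustrate the point, not the parent's numbers): detecting a small lift in an A/B test takes far more users than "experiments are cheap" suggests.

```python
import math

def sample_size_per_arm(p_base, rel_lift, z_alpha=1.96, z_power=0.84):
    """Approximate users needed per arm for a two-proportion z-test
    (defaults: alpha = 0.05 two-sided, power = 0.8)."""
    p2 = p_base * (1 + rel_lift)
    p_bar = (p_base + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p_base * (1 - p_base) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p_base) ** 2)

# Detecting a 1% relative lift on a 5% baseline conversion rate:
print(sample_size_per_arm(0.05, 0.01))  # roughly 3 million users per arm
```

At that scale, picking the wrong hypothesis burns weeks of traffic no matter how cheap the code was to write.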
More broadly, it's well understood that experiments are not a replacement for design and UX. Google is famously great at the former and terrible at the latter. Sure, the AI maxxers will say the machines are coming for all creative endeavours as well, but I'm going to need more evidence. So far, everything good I've seen come from AI still had a human at the wheel, and I don't see that changing any time soon.
And before long you have a solution that is made up of a thousand pieces of spaghetti that neither you nor anyone else understands. And when your solution becomes too brittle to use, cannot be maintained, or fails catastrophically, then what? Just hope that's someone else's problem?
Well, you converge to a system, but you do that by pruning what you don't want.
If you care about maintainability and quality (and I include maintaining it with LLM-based tools), then you need to understand what it does. In doing so you will find lots of things for it to fix; you'll probably also find that the architecture it has chosen is not right for what you want.
So the infinite monkeys with infinite typewriters approach.
> If you’re spending time thinking and not experimenting, then it’s because experimentation is expensive.
No, because no amount of experimentation can solve many of the problems that have been solved by thinking. Even your claim about "experiments are cheap" requires thinking to decide what experiments to do. No one is generating all possible solutions that fit in X megabytes; you have to think to constrain the solution space.
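To put a number on that, here's a quick sketch (mine, purely to illustrate the point) of how big "all possible solutions that fit in X megabytes" actually is:

```python
import math

MEGABYTE = 2**20     # bytes
BITS = 8 * MEGABYTE  # bits in one megabyte

# The number of distinct 1 MB byte strings is 2**BITS; we only estimate its
# decimal length, since the number itself is astronomically large.
digits = BITS * math.log10(2)
print(f"about 10^{digits:,.0f} candidate 1 MB programs")  # ~10^2,525,223
```

No amount of cheap experimentation dents a space like that; thinking is what prunes it down to something searchable.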
> With an LLM you don’t have to try to predict a complex system in advance; experiments are so cheap you can just converge to a solution directly.
We saw a similar philosophy in TDD advocacy many years ago. Search for something like "Sudoku Jeffries" to see how that went. Then search for "Sudoku Norvig" to see what it looks like when you actually understand the problem.
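For anyone who doesn't want to search: the contrast is between test-driving toward a solver with no model of the problem, and modelling the constraints directly. Here's a minimal sketch in the spirit of the constraint-based approach (my own illustration, not Norvig's actual code):

```python
def candidates(grid, r, c):
    """Digits that can legally go in cell (r, c)."""
    row = set(grid[r])
    col = {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    box = {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return set("123456789") - row - col - box

def solve(grid):
    """Solve in place and return True on success.
    grid is a list of 9 strings, with '0' marking empty cells."""
    # Pick the empty cell with the fewest candidates (most constrained first);
    # a cell with zero candidates means this branch is a contradiction.
    best = None
    for r in range(9):
        for c in range(9):
            if grid[r][c] == "0":
                opts = candidates(grid, r, c)
                if not opts:
                    return False
                if best is None or len(opts) < len(best[2]):
                    best = (r, c, opts)
    if best is None:
        return True  # no empty cells left: solved
    r, c, opts = best
    for digit in opts:
        grid[r] = grid[r][:c] + digit + grid[r][c + 1:]
        if solve(grid):
            return True
    grid[r] = grid[r][:c] + "0" + grid[r][c + 1:]  # backtrack
    return False

puzzle = [
    "530070000", "600195000", "098000060",
    "800060003", "400803001", "700020006",
    "060000280", "000419005", "000080079",
]
grid = list(puzzle)
assert solve(grid)
print("\n".join(grid))
```

The point isn't the code itself; it's that the structure of the problem (the constraints) does the heavy lifting, which is exactly what blind iteration never discovers.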
The idea that you can somehow iterate your way to a solution when you have no idea where you're trying to go, or even which direction your next step should be in, has always seemed absurd to some of us, but in the era of LLMs there's no longer any doubt.

In the agentic era (can we call a few months an "era"?) I estimate that 90% or more of the writing I've read about how to use agents most effectively came down to making sure there is a clear specification for what they need to implement first, and then imposing extensive guard rails to make sure their output does in fact follow that specification. It's all about doing enough design work up front to remove any ambiguity before coding the next part of the implementation, and almost everyone claiming any sort of real world success with coding agents seems to have reached a similar conclusion.
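For what it's worth, the guard rails people describe usually reduce to something mechanical like this (an illustrative sketch with a hypothetical path, not any particular tool's API): write the spec down as executable checks first, then only accept the agent's output when they pass.

```python
import subprocess
import sys

# Hypothetical path: the spec, written as tests *before* the agent starts.
SPEC_TESTS = "tests/test_spec.py"

def accept_agent_output() -> bool:
    """Gate the agent's changes on the pre-written spec tests."""
    result = subprocess.run(
        [sys.executable, "-m", "pytest", SPEC_TESTS, "-q"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(result.stdout)  # feed the failures back to the agent and retry
        return False
    return True
```

Which is to say: the design thinking happens before the agent touches anything. The loop only converges because a human already decided what "done" means.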