> because token capacity is neither unlimited nor free.
This is like dissing software from 2004 because it used 2 GB of extra memory.
In the last year, token context windows have grown by about 100x while halving in cost.
If this is the crux of your argument, technology advancement will render it moot.
> In the last year, token context windows have grown by about 100x while halving in cost.
So? It's nowhere close to solving the issue.
I'm not anti-LLM. I'm very senior at a company that's had an AI-centric primary product since before the GPT explosion. But to navigate what's going on now, we need to understand the technology's current strengths and weaknesses, as well as what they're likely to be in the near, medium, and long term.
The cost of LLMs working on their own generated multi-million-LOC systems is very unlikely to become tractable in the near term, and possibly not even in the medium term. Besides, no one has yet demonstrated an LLM-based system that even achieves that, i.e. resolves the technical debt it created.
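A rough back-of-envelope sketch of why I say "tractable" is a cost question, not just a context-window question. All the numbers below (codebase size, tokens per line, context size, price per million input tokens) are assumptions for illustration only:

```python
# Back-of-envelope: can an LLM "hold" a multi-million LOC codebase in context,
# and what would repeatedly re-reading it cost? Every figure here is an
# assumption for illustration, not a measurement of any particular model.

LOC = 5_000_000              # assumed size of the generated system
TOKENS_PER_LOC = 10          # rough assumption: ~10 tokens per line of code
CONTEXT_WINDOW = 1_000_000   # assumed large frontier-model context window
PRICE_PER_M_INPUT = 3.00     # assumed USD per million input tokens

total_tokens = LOC * TOKENS_PER_LOC
print(f"Codebase: ~{total_tokens / 1e6:.0f}M tokens "
      f"(~{total_tokens / CONTEXT_WINDOW:.0f}x the context window)")

cost_per_full_pass = total_tokens / 1e6 * PRICE_PER_M_INPUT
print(f"Reading the whole codebase once: ~${cost_per_full_pass:.0f}")

# A debt-resolution loop re-reads large slices many times; even 100 passes
# over 10% slices adds up, before counting output tokens or retries.
passes, fraction = 100, 0.10
print(f"{passes} passes over {fraction:.0%} slices: "
      f"~${passes * fraction * cost_per_full_pass:.0f}")
```

Under these assumed numbers the codebase is tens of times larger than even a very large context window, and each full read costs real money, which is why "100x bigger and half the price" doesn't by itself make the problem go away.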
Don't let fanboyism get in the way of rationality.