Hacker News

ctdinjeu2 · today at 4:38 PM · 0 replies

It’s not yet possible due to context size limitations.

LLMs can’t accurately retain most codebases, or even most individual code files: they start making serious mistakes at around 500 lines.

Paste in a ~200-line React component or API endpoint and ask it to fix or add something, and it’s fine. Paste in a huge file, and it starts omitting pieces and making mistakes, and it gets worse as the session goes on.

You have to keep reminding it by repeatedly refreshing its context with the part in question.
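That "refreshing" workflow can be sketched mechanically: instead of pasting the whole file each turn, re-send only the region around the code in question. This is a minimal, hypothetical illustration (the function names and prompt wording are my own, not any particular tool's API):

```python
# Sketch of "context refreshing": on each turn, rebuild the prompt from just
# the slice of the file surrounding the code being worked on, instead of the
# whole file. All names here are illustrative.

def extract_region(source: str, target: str, context_lines: int = 20) -> str:
    """Return the lines surrounding the first line containing `target`."""
    lines = source.splitlines()
    for i, line in enumerate(lines):
        if target in line:
            start = max(0, i - context_lines)
            end = min(len(lines), i + context_lines + 1)
            return "\n".join(lines[start:end])
    return ""

def build_prompt(source: str, target: str, instruction: str) -> str:
    """Compose a fresh prompt containing only the relevant region."""
    region = extract_region(source, target)
    return (
        "Here is the relevant part of the file (refreshed this turn):\n"
        f"{region}\n"
        "\n"
        f"Task: {instruction}"
    )

# A 1000-line file gets trimmed to the ~41 lines around the target function,
# keeping each turn's prompt small regardless of total file size.
big_file = "\n".join(f"line {n}" for n in range(1000))
big_file = big_file.replace("line 500", "def handle_request():")
prompt = build_prompt(big_file, "handle_request", "add input validation")
print(len(prompt.splitlines()))
```

The point of the sketch is the shape of the workaround, not the retrieval heuristic: a human (or wrapper script) keeps doing the working-memory management that the model can't do for itself.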

Everyone who has seriously tried knows this.

For this reason alone, the LLM “agent” is simply not one. Not yet. It cannot really drive itself, and that’s a fundamental limitation of the technology.

Someone who knows more about model architecture might chime in on whether increasing the context size will help agents retain a larger working memory to an acceptable degree of accuracy. As it stands, though, the limitation means an LLM works more like a calculator you must actively operate than an autonomous agent.