
notepad0x90 · yesterday at 9:12 PM

The problem is you still have to type prompts. That might require fewer words, but you still have to type it all up, and it won't be short. For a small code base, your LLM prompt might be a couple of pages; for a complex code base it might be the size of a medium-length novel.

In the end, you have lengthy text typed by humans, and it might contain logic errors, contradictions, and unforeseen issues in the instructions. The same processes and tooling used for syntactic code might need to apply to it: you will need to version control your prompts, for example.

LLMs solve the labor problem, not the management problem. You still have to spend a lot of time and effort on pages and pages of LLM prompts, trying to figure out which part of the prompt is generating which part of your code base. LLMs can debug and troubleshoot code, but they can't debug and troubleshoot your prompts for you. I doubt they can take their own output, generated across multiple agents and many sessions, and trace it all back to the text in your prompt that caused the mess either.
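To make that concrete: if prompts are the real source, they need the same provenance tooling code gets. Here's a minimal sketch of the idea (Python, hypothetical file names and scheme, not any real tool): record which exact prompt version produced which generated file, so you can at least trace backwards.

    import hashlib
    import json
    import pathlib

    def record_provenance(prompt_path, output_path, manifest="provenance.json"):
        # Hash the prompt text so a generated file can be traced back to the
        # exact prompt revision that produced it, even after the prompt changes.
        prompt_text = pathlib.Path(prompt_path).read_text()
        digest = hashlib.sha256(prompt_text.encode()).hexdigest()[:12]

        # Append/overwrite an entry in a simple JSON manifest, which would be
        # committed alongside the prompts and the generated code.
        mpath = pathlib.Path(manifest)
        entries = json.loads(mpath.read_text()) if mpath.exists() else {}
        entries[str(output_path)] = {"prompt": str(prompt_path), "prompt_sha256": digest}
        mpath.write_text(json.dumps(entries, indent=2))

    # e.g. record_provenance("prompts/auth_service.md", "src/auth_service.py")

Even something this crude makes the point: you end up rebuilding build metadata, blame, and diffing for prose, which is exactly the management overhead the prompt was supposed to remove.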

On one hand, I want to see what this experimentation will yield; on the other hand, it had better not create a whole suite of new problems to solve just to be usable.

My confusion, really, is when experienced programmers advocate for this stuff. Actually typing in the code isn't very hard. I like the LLM-assistance aspect of figuring out what to actually code, and doing some research. But for actually figuring out what code to type in, sure, LLMs save time, but not that much time. Getting it to work, debugging, troubleshooting, maintaining: those tend to be the pain points.

Perhaps there are shops out there that just crank out lots of LoC, and even measure developer performance based on LoC? I can see where this might be useful.

I do think LLM-friendly high-level languages need to evolve, for sure. But the ideal workflow is always going to be a co-pilot type of workflow: humans researching and guiding the AI.

Psychologically, until AI can maintain its own code, this is a really bad idea. Actually typing out the code is extremely important for humans to be able to understand it. And if someone else wrote the code and you have to write something that becomes part of that code base, you have to figure out how things fit together yourself; AI can't do that for you if you're still maintaining the codebase in any capacity.