jarjoura · yesterday at 8:08 PM

Your comment is spot on, but there's a nuance that people still new to these LLMs don't yet see, and it's the real reason "he'd be better off just writing the damned book instead."

1. That prompt is always a slot machine. It's never 100% deterministic, and that's why we haven't seen an explosion of Claude Skills. When it works, it's magical and everyone is wowed. But there's a set of users who then bang their heads, wondering why their identical attempt produced garbage compared to their coworker's. "It must be a skill issue." No, it's just the LLM being an LLM.
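
To make the slot machine concrete, here's a minimal sketch, assuming the Anthropic Python SDK and an API key in your environment (the model id is just a placeholder for whichever one you use):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    PROMPT = "Write a Python function that deduplicates a list while preserving order."

    def ask() -> str:
        # Same prompt, same parameters, every time.
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder model id
            max_tokens=512,
            temperature=1.0,  # sampling is where the slot-machine feel comes from
            messages=[{"role": "user", "content": PROMPT}],
        )
        return response.content[0].text

    first, second = ask(), ask()
    print("identical runs match:", first == second)  # almost always False

Dropping the temperature to 0 makes runs more repeatable, but provider-side model changes and floating-point nondeterminism mean even that isn't a guarantee.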

2. Coding agents are hyper-localized and rarely consider the larger project when solving a problem. So you end up with these "paper cuts": duplicated functions or classes that each do one thing slightly differently. Now the LLM in future runs has to decide which of them to use, and you end up with two competing implementations. Future you will bang your head trying to figure out how to merge them.
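
A hypothetical illustration of the paper cut (the file names and helpers are invented): two agent runs, weeks apart, each reinvent the same formatter with a subtle difference:

    from datetime import datetime

    # utils/dates.py -- left behind by last month's agent run
    def format_timestamp(ts: datetime) -> str:
        return ts.strftime("%Y-%m-%d %H:%M")

    # reporting/helpers.py -- this week's run, same job, seconds included
    def render_timestamp(ts: datetime) -> str:
        return ts.strftime("%Y-%m-%dT%H:%M:%S")

    # The next run has to pick one; callers end up split across both.
    now = datetime(2025, 1, 2, 3, 4, 5)
    print(format_timestamp(now))  # 2025-01-02 03:04
    print(render_timestamp(now))  # 2025-01-02T03:04:05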

3. The "voice" of the code it outputs is trained on public repositories so if your internal codebase is doing something unique, the LLM will consistently pick the voice it's trained on, forcing you to rewrite behind it to match your internal code.

4. It has no chill. If I set any "important" rules in the prompt, it sometimes adheres to them at the expense of doing the right thing in its changes. Or it ignores them entirely and does its own thing at exactly the moment the rule should have applied. Which comes back to your point: if I had just written the code myself, it would have been fewer words than any "perfect" prompt needed to get the same change.