Hacker News

zubspace · yesterday at 10:41 PM · 2 replies

It's so sad that we're the ones who have to tell the agent how to improve by extending agent.md or whatever. I constantly have to tell it what I don't like, what could be improved, or ask it for clarifications or alternative solutions.

This is what's so annoying about it. It's like a child that makes the same mistakes again and again.

But couldn't it adjust itself with the goal of reducing its errors bit by bit? Wouldn't this lead to the ultimate agent, one that can read your mind? That would be awesome.


Replies

audience_mem · yesterday at 11:09 PM

> It's so sad that we're the ones who have to tell the agent how to improve by extending agent.md or whatever.

Your improvement is someone else's code smell. There's no absolute right or wrong way to write code, and that's coming from someone who definitely thinks there's a right way. But it's my right way.

Anyway, I don't know why you'd expect it to write code the way you like after it's been trained on the whole of the Internet, the RLHF labelers' preferences, and the reward model.

Putting some words in AGENTS.md hardly seems like the most annoying thing.

tip: Add a /fix command that tells it to fix $1 and then update AGENTS.md with the text that would stop it from making that mistake in the future. Use your nearest LLM to tweak that prompt. It's a good timesaver.
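A minimal sketch of what that command's prompt might look like, assuming a coding agent that supports custom slash commands defined as prompt files with a $1 argument placeholder (the exact file location and placeholder syntax are tool-dependent; the path below is hypothetical):

```markdown
<!-- hypothetical file: .claude/commands/fix.md -->
Fix the following issue: $1

After the fix is verified, append a short rule to AGENTS.md that
would have prevented this class of mistake. Keep the rule to one
or two lines, phrase it as a general guideline rather than a
description of this specific bug, and do not duplicate any rule
already present in AGENTS.md.
```

Invoked as something like `/fix the date parser silently drops the timezone`, the agent both repairs the bug and leaves behind a guideline, so the correction compounds instead of being repeated by hand each session.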

cactusplant7374 · yesterday at 10:46 PM

It is not a mind reader. I enjoy giving it feedback because it shows I am the one in charge of the engineering.

I also love using it for research for upcoming features. Research + pick a solution + implement. It happens so fast.