Hacker News

cracki · yesterday at 11:45 PM

Absolutely!

I've been wondering for years how to get whatever LLM I'm using to ask me questions instead of just filling the gaps with assumptions and sprinting off.

User-configurable agent instructions haven't worked consistently for this. The vendor's system prompt may well contain instructions not to ask questions, which would override mine.

Sure, there's a practical limit to how much clarification it ought to request, but never asking at all is just annoying.
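For reference, this is the flavor of thing I've tried, without much luck. A minimal sketch, assuming an OpenAI-style chat completions API; the model name and the instruction wording are just illustrative:

```python
# Minimal sketch of a "please ask before assuming" instruction,
# assuming an OpenAI-style chat completions API. The model name and
# the wording are illustrative; in my experience this kind of
# instruction is not reliably honored.
from openai import OpenAI

client = OpenAI()

ASK_FIRST = (
    "If my request is ambiguous or missing details you need, "
    "ask me a clarifying question before answering. "
    "Do not fill the gaps with assumptions."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": ASK_FIRST},
        {"role": "user", "content": "Write a script to back up my database."},
    ],
)
print(response.choices[0].message.content)
```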


Replies

Nition · today at 1:11 AM

Yeah, nothing I've put in the instructions like "ask me if you're not sure!" has ever had a noticeable effect. The only thing that works well is this loop (rough sketch after the list):

- Ask question

- Get answer

- Go back and rewrite the initial question to include a clarification for the thing the AI got wrong
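Something like this, as a rough sketch. It assumes an OpenAI-style chat completions API; ask_model() and the model name are illustrative stand-ins, not a specific tool:

```python
# Rough sketch of the rewrite-and-retry loop described above,
# assuming an OpenAI-style chat completions API.
from openai import OpenAI

client = OpenAI()

def ask_model(question: str) -> str:
    """Send a single standalone question and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

question = "Write a cron job to back up my database."
print(ask_model(question))

# The model guessed wrong (say, it assumed Postgres), so instead of
# replying in-thread, fold the clarification back into the original
# question and start over with the amended prompt.
question += " The database is MySQL 8, and the backup should go to S3."
print(ask_model(question))
```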