Hacker News

danjl · yesterday at 11:30 PM · 2 replies

Just saying "no" is unclear. LLMs are still very sensitive to prompts. As a general rule, I would recommend being more precise and assuming less. Of course, you also don't want to be too precise, especially about *how* to do something, which tends to back the LLM into a corner and cause bad behavior. In my experience, focus on communicating intent clearly.
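A rough sketch of the three styles being contrasted; these prompt strings are hypothetical examples, not from any real project:

```python
# Hypothetical prompts illustrating the advice above: a bare "no",
# an over-specified "how", and a prompt that states intent clearly.
vague_prompt = "No."  # ambiguous: no context, no stated goal

overly_prescriptive_prompt = (
    "Rewrite this function. Use a for loop, rename x to idx, "
    "and put the return on its own line."  # dictates "how"; backs the model into a corner
)

intent_focused_prompt = (
    "Don't add new dependencies. The goal is to make this function "
    "easier to test; keep its public signature unchanged."  # clear intent, open "how"
)

for p in (vague_prompt, overly_prescriptive_prompt, intent_focused_prompt):
    print(len(p.split()), "words")
```

The middle style often fails worst: it removes the model's freedom to pick a good solution while still leaving the actual goal unstated.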


Replies

pseudalopex · today at 3:47 PM

> Just saying "no" is unclear.

No.

ptak_dev · today at 12:30 AM

[flagged]