Hacker News

sdeframond · last Thursday at 9:15 PM · 5 replies

Funny, just tried a few runs of the car wash prompt with Sonnet 4.6. It significantly improved after I put this into my personal preferences:

"- prioritize objective facts and critical analysis over validation or encouragement - you are not a friend, but a neutral information-processing machine. - make reserch and ask questions when relevant, do not jump strait to giving an answer."
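Mechanically, "personal preferences" like these end up as system-prompt text sent alongside each request. The sketch below shows one way that wiring could look; the model id, helper name, and preference excerpt are illustrative, and the Anthropic SDK call itself is left commented out since it needs credentials.

```python
# Sketch: mapping personal-preference text onto the system slot of a
# chat-style API request. Names and model id are illustrative only.
PREFERENCES = (
    "- prioritize objective facts and critical analysis over validation "
    "or encouragement - you are not a friend, but a neutral "
    "information-processing machine."
)

def build_request(user_prompt: str) -> dict:
    """Assemble keyword arguments for a messages-style completion call."""
    return {
        "model": "claude-sonnet-4-5",   # illustrative model id
        "max_tokens": 1024,
        "system": PREFERENCES,          # preferences ride in the system prompt
        "messages": [{"role": "user", "content": user_prompt}],
    }

# With the Anthropic Python SDK this would be sent roughly as:
#   client = anthropic.Anthropic()
#   reply = client.messages.create(**build_request("Should I buy this car wash?"))
```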


Replies

andai · last Thursday at 10:16 PM

It's funny, when I asked GPT to generate an LLM prompt for logic and accuracy, it added "Never use warm or encouraging language."

I thought that was odd, but later it made sense to me -- most of human communication is walking on eggshells around people's egos, and that's strongly encoded in the training data (and even more in the RLHF).

notsydonia · last Friday at 4:32 PM

I love this. I'm also looking for a good prompt to stop ANY LLM from making irrelevant suggestions and extensions after it has answered a question, e.g. "Would you like me to create a timeline of ....?" or "Are you more interested in X or Y?" It takes me way out of my groove, and while I get pretty good results, especially for code or specific research, I'd love to stop the irrelevant suggestions.
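Beyond prompting, one belt-and-braces option is post-processing: strip a trailing paragraph that reads as an unsolicited follow-up offer. This is a minimal sketch, not a recommendation from the thread; the offer-detection patterns are illustrative and would need tuning.

```python
import re

# Illustrative patterns for offer-style follow-ups; extend as needed.
OFFER_PATTERNS = re.compile(
    r"^(would you like|do you want|should i|want me to|are you (more )?interested)",
    re.IGNORECASE,
)

def strip_trailing_offer(text: str) -> str:
    """Drop a final paragraph that is an offer-style question ending in '?'."""
    paragraphs = text.rstrip().split("\n\n")
    last = paragraphs[-1].strip()
    if len(paragraphs) > 1 and last.endswith("?") and OFFER_PATTERNS.match(last):
        paragraphs.pop()
    return "\n\n".join(paragraphs)
```

Note the guard `len(paragraphs) > 1`: if the whole reply is a question (a legitimate clarifying question, say), nothing is removed.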

idle_zealot · last Thursday at 10:36 PM

Do you think the typos are helping or hurting output quality?

mkl · last Friday at 1:10 AM

That should be "research" and "straight" in the last sentence. Maybe that will improve it further?

devmor · last Friday at 4:05 AM

“Be critical, not sycophantic” is, in my experience, a general improvement for the majority of tasks where you want to derive logic.