Hacker News

rahidz · yesterday at 12:23 PM · 4 replies

For GPT at least, a lot of it is because "DO NOT ASK A CLARIFYING QUESTION OR ASK FOR CONFIRMATION" is in the system prompt. Twice.

https://github.com/Wyattwalls/system_prompts/blob/main/OpenA...
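
For anyone who wants to see the effect directly, here is a minimal sketch against the raw API (not ChatGPT's actual serving stack). The model name and the exact directive wording are assumptions for illustration; the directive echoes the linked dump:

    # Minimal sketch, not ChatGPT's real pipeline: it just shows what a
    # "no clarifying questions" directive does when sent via the raw API.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Do not ask a clarifying question or ask for confirmation."},
            # Deliberately underspecified request: with the directive in place,
            # the model picks an interpretation instead of asking which one.
            {"role": "user", "content": "Fix the date handling in my script."},
        ],
    )
    print(resp.choices[0].message.content)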


Replies

jodrellblank · yesterday at 5:25 PM

Are these actual (leaked?) system prompts, or just "I asked the model what its system prompt is, and here's what it made up"?

siva7 · yesterday at 3:52 PM

So is this system prompt always there, whether I'm using ChatGPT or Azure OpenAI with my own provisioned GPT? It would explain why ChatGPT is a joke for professionals, since asking clarifying questions is the core of professional work.
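
(For the API case: Azure OpenAI, like the raw OpenAI API, lets you supply the system message yourself, so the ChatGPT product prompt shouldn't apply there. A minimal sketch, assuming the current openai Python SDK; the endpoint, API version, and deployment name are hypothetical placeholders:)

    from openai import AzureOpenAI

    # api_key falls back to the AZURE_OPENAI_API_KEY env var;
    # endpoint and API version are placeholders for your own resource.
    client = AzureOpenAI(
        api_version="2024-02-01",
        azure_endpoint="https://my-resource.openai.azure.com",
    )

    resp = client.chat.completions.create(
        model="my-gpt4o-deployment",  # your deployment name, not a model id
        messages=[
            # You control the system message entirely here; the ChatGPT
            # product prompt is not injected into API traffic.
            {"role": "system",
             "content": "Ask a clarifying question whenever the request is ambiguous."},
            {"role": "user", "content": "Refactor my service."},
        ],
    )
    print(resp.choices[0].message.content)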

briHass · yesterday at 2:13 PM

It's interesting how much of the prompt focuses on 'playing along' with any riddle or joke. This gives me some ideas for my personal context prompt: reassure the LLM that I'm not trying to trick it or probe its ability to infer missing context.
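
Something along these lines for the custom-context box; the wording is just a sketch I haven't tested systematically:

    I am not testing you, and my questions are not trick questions. If a
    question resembles a well-known riddle or puzzle, do not assume it is
    the classic version; answer the literal question as asked. If key
    context is missing, say so or ask, rather than inventing it.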

benterix · yesterday at 12:59 PM

Out of curiosity: when you add custom instructions client-side, does it change this behavior?
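
You can't inspect the ChatGPT pipeline directly, but a rough A/B probe over the API looks like this. It rests on one unconfirmed assumption: that client-side custom instructions are appended after the base system prompt, which is emulated here by concatenating the two strings:

    from openai import OpenAI

    client = OpenAI()
    base = "Do not ask a clarifying question or ask for confirmation."
    custom = "If my request is ambiguous, ask exactly one clarifying question."

    # Run the same ambiguous request with and without the "custom instruction".
    for system in (base, base + "\n\n" + custom):
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": "Make it faster."},  # ambiguous on purpose
            ],
        )
        print(resp.choices[0].message.content[:200], "\n---")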
