Hacker News

ddoolin · last Thursday at 10:54 PM · 5 replies

Same here. I quickly learned that if you merely ask questions about its understanding or plans, it starts looking for alternatives, because my questioning is interpreted as rejection or criticism rather than taken at face value. So I often (not always) have to caveat questions like that too. It's really been like that since before Claude Code or Codex even rolled around.

It's just strange because that's a very human behavior, and although the model learns from humans, it isn't one, so it would be nice if it acted more robotically in this sense.


Replies

windward · yesterday at 10:58 AM

Yeah, numerous times I've replied to a comment online, to add supporting context, and it's been interpreted as a retort. So now I prefix them with 'Yeah, '.

fittingopposite · yesterday at 9:22 PM

Very interesting observation. I wonder if anyone has analyzed the underlying "culture" of LLMs and what it would mean for international users.

cturhan · yesterday at 6:34 PM

The reason is the system prompt they provided. They probably added a clause like “plan the user’s requirements… and implement the required code.”

WOTERMEON · yesterday at 3:41 PM

We're training neural networks on human content to be human-like. We don't have "robotic content."

muyuu · yesterday at 1:47 AM

Do what you would do with a person: allocate time for them to produce documentation, and be specific about it.