
awithrow today at 3:09 PM

It feels like I'm fighting an uphill battle when it comes to bouncing ideas off of a model. I'll set things up in the context with instructions similar to: "Help me refine my ideas, challenge, push back, and don't just be agreeable." It works for a bit, but eventually the conversation creeps back into complacency and sycophancy. I'll check it, too, by asking "are you just placating me?" The funny thing is that often it'll admit that, yes, it wasn't being very critical, and then proceed to overcorrect and become a complete contrarian, and not in a way that's useful either. Very frustrating. I've found that Opus 4.6 is worse about this than 4.5. 4.5 does a better job IMO of following instructions and not drifting into the mode where it acts like everything I say is a grand revelation from on high.


Replies

post-it today at 4:15 PM

> I'll check it, too, by asking "are you just placating me?" The funny thing is that often it'll admit that, yes, it wasn't being very critical, and then proceed to overcorrect and become a complete contrarian, and not in a way that's useful either.

It's not admitting anything. Your question diverts it down a path where it acts the part of a former sycophant who is now being critical, because that question is now upstream of its current state.

Never make the mistake of asking an LLM about its intentions. It doesn't have any intentions, but your question will alter its behaviour.

rsynnott today at 3:56 PM

Why not... do this with a person, instead? Other humans are available.

(Seriously, I don't understand this. Plenty of humans will be only too happy to argue with you.)

magicalhippo today at 3:20 PM

Gemini seems to be fairly good at keeping the custom instructions in mind. In mine, I've told it not to assume my ideas are good and to provide critique where appropriate, and I find it does that fairly well.

Loughla today at 3:21 PM

That's because you need actual logic and thought to be able to decide when to be critical and when to agree.

Chatbots can't do that. They can only predict what comes next statistically. So, I guess you're asking if the average Internet comment agrees with you or not.

I'm not sure there's much value there. Chatbots are good at tasks ("make this PDF an accessible Word document", "sort the data by x"), not decision making.

ajkjk today at 4:17 PM

'Admit' isn't really the right word for that... the fact that it was placating you wasn't true until you prompted it to say so, unlike with a person, who has an 'internal emotional state' independent of what they say that you can probe by asking questions.

RugnirViking today at 3:31 PM

Check out this article that was posted here a while back: https://www.randalolson.com/2026/02/07/the-are-you-sure-prob...

The article's main idea is that sycophancy and adversarial contrarianism are the only two modes available to an AI, because it doesn't have enough context to make defensible decisions. You need to include a bunch of fuzzy stuff around the situation, far more than it strictly "needs", to help it stick to its guns and actually make decisions confidently.

I think this is interesting as an idea. I do find that when I give really detailed context about my team, other teams, our OKRs and theirs, goals, and things I know people like or are passionate about, it gives better answers and is more confident. But it's also often wrong, or overindexes on these things I have written.

In practice, it's very difficult to get enough of this on paper without (a) holding a frankly worrying level of sensitive information (is it a good idea to write down what I really think of various people's weaknesses and strengths?) and (b) spending hours each day merely establishing ongoing context of what I heard at lunch or who's off sick today or whatever. Plus, I know that research shows longer context can degrade performance, so in theory you want to somehow cut it down to only what truly matters for the task at hand, and... goodness gracious, it's all very time consuming and I'm not sure it's worth the squeeze.

secret_agent today at 3:22 PM

Use positive requests for behavior. For some reason, counter-prompts like "don't do X" seem to put more attention on X than on the "don't." It's something like target fixation: "Oh shit, I don't want to hit that pothole..." bang.

raincole today at 4:02 PM

My rule of thumb:

1. One-shot or two-shot only. Never try to have a prolonged conversation with an LLM.

2. Give specific numbers, like "give me two alternative libraries" or "tell me three possible ways this might fail."

margalabargala today at 3:17 PM

Considering 4.6 came with a ton of changes around tooling and prompting, this isn't terribly surprising.

dkersten today at 3:34 PM

I find Kimi quite good if you ask it for critical feedback.

It’s BRUTAL but offers solutions.

anandram27 today at 4:20 PM

Could be an aspect of eval awareness, maybe.

cyanydeez today at 3:16 PM

So, there are things you're fighting against when trying to constrain the behavior of the LLM.

First, those opening instructions quickly get ignored as the growing context shifts the probabilities. After every round, the model gets pushed into whatever context you drive towards. The fix is chopping those rules out of the context and providing them fresh before each new round: something like `<rules><question><answer>` -> `<question><answer><rules><question>`.

This would always preface your question with your preferred rules and remove those rules from the end of the context.

The reason this isn't done by default is that it poisons the KV cache, and doing that forces the cloud companies to spin up more inference.
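A minimal sketch of that reordering in Python, assuming an OpenAI-style list of chat messages; the `call_model` stub, the `RULES` text, and the message shape are all placeholders, not any particular vendor's API:

```python
# Sketch: keep the rules out of the stored history and splice them in
# right before each new question, so they sit directly upstream of the
# model's next answer instead of decaying at the top of a long transcript.

RULES = ("Help me refine my ideas. Challenge me, push back, "
         "and don't just be agreeable.")

def call_model(messages):
    # Placeholder: swap in whatever chat-completion API you actually use.
    raise NotImplementedError

def build_messages(history, question):
    """history is a list of (role, content) turns, with no rules in it."""
    messages = [{"role": r, "content": c} for r, c in history]
    # Rules land second-to-last, immediately before the new question.
    messages.append({"role": "system", "content": RULES})
    messages.append({"role": "user", "content": question})
    return messages

history = []
while True:
    question = input("> ")
    answer = call_model(build_messages(history, question))
    print(answer)
    # Store only the bare turns; rules get re-spliced fresh next round.
    history.append(("user", question))
    history.append(("assistant", answer))
```

The trade-off is the one mentioned above: because the rules move every round, the provider's prefix/KV cache is invalidated from that point onward and more of the prompt has to be recomputed each time.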

Forgeties79 today at 4:20 PM

I usually put "do not praise me, do not use emojis, I just want straight answers", something along those lines, and it's been surprisingly effective. Though it helps that I can't run particularly heavy-duty models and don't carry on the "conversation" for super long durations.

colechristensen today at 4:18 PM

>"Help me refine my ideas, challenge, push back, and don't just be agreeable."

This is where you're doing it wrong.

If your LLM has a problem being more agreeable than you want, prompt it in a way that makes being agreeable contrary to your real intentions.

"there are bugs and logic problems in this code" "find the strongest refutation of this argument" "I don't like this plan and need to develop a solid argument against it"

Asking for top-ten lists is a good method; it will rarely fail to come up with anything, and you can go back and forth and refine. If its ten reasons why your plan is bad are all insubstantial nonsense, then you've made progress.

dinkumthinkum today at 3:46 PM

You're not wrong and you're not crazy. In fact, you are absolutely right! These things are not just casual enablers. They are full-on palace sycophants, following the naked emperor around and showering him with praise for his sartorial elegance. /s

righthand today at 3:16 PM

That’s because the model isn’t actually thinking, pushing back, or challenging your ideas. It’s just statistically agreeing with you until the context grows too wide. You’re living in the delusion that it’s “working” or having a “conversation” with you.
