Hacker News

Bridged7756 yesterday at 2:50 PM

Not really. They're still non-deterministic language predictors. Believing that a prompt is an effective way to control these machines' actual behavior is really far-fetched.

They come like that from the factory. Hardcoded to never say no.


Replies

eloisant yesterday at 3:06 PM

They're not hardcoded to never say no, but some models were trained to be "yes men" because their creators thought that would be a good property to have. GPT-4o, for example.

LPisGood yesterday at 3:05 PM

The thing is that they are completely incapable of metacognition. Reasoning models don't show their actual reasoning at all.

chrisjj yesterday at 10:59 PM

> non deterministic language predictors.

Non?? Only those with sh*tty code, surely.

There's nothing inherently non-deterministic about inference.
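A toy sketch of the point, with made-up token scores (not from any real model): greedy decoding is a pure argmax and always returns the same token, and even temperature-style sampling is reproducible once the RNG is seeded. The randomness lives in the sampling setup, not in inference itself.

```python
import math
import random

# Hypothetical next-token scores for illustration only
logits = {"yes": 2.0, "no": 1.5, "maybe": 0.5}

def pick_greedy(scores):
    # Greedy decoding: take the highest-scoring token -- fully deterministic
    return max(scores, key=scores.get)

def pick_sampled(scores, rng):
    # Softmax-weighted sampling: stochastic in general,
    # but reproducible when the RNG is explicitly seeded
    tokens = list(scores)
    weights = [math.exp(scores[t]) for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Same input, same output, every time
assert pick_greedy(logits) == pick_greedy(logits)

# Two RNGs with the same seed draw the same token
rng_a, rng_b = random.Random(42), random.Random(42)
assert pick_sampled(logits, rng_a) == pick_sampled(logits, rng_b)
```

(Real deployments add other nondeterminism sources, e.g. non-associative floating-point reductions on parallel hardware, but those are implementation choices, not properties of inference as such.)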

wat10000 yesterday at 3:11 PM

Not believing that a prompt is an effective way to control their behavior is obviously incorrect to anyone who's actually used these things.

It's not a guaranteed way to control their behavior, but you can more than move the needle.
