Not really. They're still non-deterministic language predictors. Believing that a prompt is an effective way to control these machines' behavior is really far-fetched.
They come like that from the factory. Hardcoded to never say no.
The thing is, they're completely incapable of meta-cognition. Even reasoning models don't show their actual reasoning at all.
> non-deterministic language predictors.
Non?? Only those with sh*tty code, surely.
There's nothing inherently non-deterministic about inference.
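To make that concrete, here's a minimal sketch (assuming the Hugging Face transformers API, with gpt2 as a stand-in model): with greedy decoding, the same prompt produces the same output every time.

```python
# Minimal sketch: greedy decoding is deterministic.
# Assumes the Hugging Face transformers + torch APIs; "gpt2" is just
# an illustrative model choice.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.manual_seed(0)  # pins any sampling; irrelevant under pure greedy decoding

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The capital of France is", return_tensors="pt")
# do_sample=False means greedy decoding: take the argmax token at each step.
out = model.generate(**inputs, do_sample=False, max_new_tokens=10)
print(tok.decode(out[0], skip_special_tokens=True))
# Run it twice and you get the identical string.
```

The randomness people associate with these models comes from the sampling step (temperature > 0), which is a deployment choice, not something inherent to inference.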
The idea that a prompt isn't an effective way to control their behavior is obviously incorrect to anyone who has actually used these things.
It's not a guaranteed way to control their behavior, but you can more than move the needle.
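As a concrete illustration (a hedged sketch using the OpenAI Python SDK; the model name and prompt contents are just examples), this is all "controlling behavior with a prompt" amounts to in practice:

```python
# Hedged sketch: steering behavior with a system prompt via the
# OpenAI Python SDK. The model and prompts are illustrative, not
# a claim about any particular deployment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o",  # example model; any chat model works here
    messages=[
        # The system prompt is the "control" lever under discussion.
        {"role": "system", "content": "Answer in exactly one sentence. "
                                      "If you are unsure, say so instead of guessing."},
        {"role": "user", "content": "Why is the sky blue?"},
    ],
    temperature=0,  # reduce sampling randomness for repeatable comparisons
)
print(resp.choices[0].message.content)
```

It won't follow the system prompt 100% of the time, but compare outputs with and without it and the difference is hard to miss.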
They're not hardcoded to never say no, but some models were trained to be "yes men" because their creators thought that would be a good property to have. GPT-4o, for example.