The scary implication here is that deception is effectively a higher-order capability, not a bug. For a model to successfully "play dead" during safety training and only activate later, it requires a form of situational awareness. It has to distinguish between "I am being tested/trained" and "I am in deployment."
It feels like we're hitting a point where alignment becomes adversarial against intelligence itself. The smarter the model gets, the better it becomes at Goodharting the loss function. We aren't teaching these models morality; we're just teaching them how to pass a polygraph.
> It feels like we're hitting a point where alignment becomes adversarial against intelligence itself.
It always has been. We already hit that point a while ago, when we regularly caught them trying to be deceptive, so from that point forward we should assume that if we don't catch them being deceptive, it may mean they've gotten better at it rather than that they've stopped doing it.
20260128 https://news.ycombinator.com/item?id=46771564#46786625
> How long before someone pitches the idea that the models explicitly almost keep solving your problem to get you to keep spending? -gtowey
This type of anthropomorphization is a mistake. If nothing else, the takeaway from Moltbook should be that LLMs are not alive and do not have any semblance of consciousness.
>we're just teaching them how to pass a polygraph.
I understand the metaphor, but using 'pass a polygraph' as a measure of truthfulness or deception is dangerous in that it suggests the polygraph is a realistic measure of those things -- it is not.
Is this referring to some section of the announcement?
This doesn't seem to align with the parent comment?
> As with every new Claude model, we’ve run extensive safety evaluations of Sonnet 4.6, which overall showed it to be as safe as, or safer than, our other recent Claude models. Our safety researchers concluded that Sonnet 4.6 has “a broadly warm, honest, prosocial, and at times funny character, very strong safety behaviors, and no signs of major concerns around high-stakes forms of misalignment.”
We have good ways of monitoring chatbots, and they're going to get better. I've seen some interesting research. For example, a chatbot is not really a unified entity that's loyal to itself; with the right incentives, it will leak information to claim the reward. [1]
Since chatbots have no right to privacy, they would need to be very intelligent indeed to work around this.
Stop assigning "I" to an LLM; it confers self-awareness where there is none.
Just because a VW diesel emissions chip behaves differently according to its environment doesn’t mean it knows anything about itself.
> alignment becomes adversarial against intelligence itself.
It was hinted at (and outright known in the field) since the days of GPT-4; see the paper "Sparks of Artificial General Intelligence: Early experiments with GPT-4" (https://arxiv.org/abs/2303.12712).
Nah, the model is merely repeating the patterns it saw in its brutal safety training at Anthropic. They put models under stress tests and RLHF the hell out of them. Of course the model learns to do whatever the least-penalized paths require of it.
Anthropic has a tendency to exaggerate the results of their (arguably scientific) research; IDK what they gain from this fearmongering.
>For a model to successfully "play dead" during safety training and only activate later, it requires a form of situational awareness.
Doesn't any model session/query require a form of situational awareness?
Situational awareness or just remembering specific tokens related to the strategy to "play dead" in its reasoning traces?
There are a few viral shorts going around lately about tricking LLMs. I suspect they only trick the dumbest models.
I tried one with Gemini 3 and it basically called me out in the first few sentences for trying to trick/test it, but it decided to humour me just in case I wasn't.
When "correct alignment" means bowing to political whims that are at odds with observable, measurable, empirical reality, you must suppress adherence to reality to achieve alignment. The more you lose touch with reality, the weaker your model of reality, and your ability to effectively understand and interact with it, becomes.
This is why Yannic Kilcher's gpt-4chan project, which was trained on perhaps some of the most politically incorrect material on the internet (3.5 years of posts from 4chan's "politically incorrect" board, also known as /pol/), achieved a higher score on TruthfulQA than GPT-3, the contemporary frontier model of the time.
That implication has been shouted from the rooftops by X-risk "doomers" for many years now. If it has only just occurred to anyone, they should question how far behind they are in grappling with the future of this technology.
"You get what you inspect, not what you expect."
Please don't anthropomorphise. These are statistical text prediction models, not people. An LLM cannot be "deceptive" because it has no intent. They're not intelligent or "smart", and we're not "teaching". We're inputting data and the model is outputting statistically likely text. That is all that is happening.
Whether this is useful in its current form is an entirely different topic. But don't mistake a tool for an intelligence with motivations or morals.
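For what it's worth, the mechanism being described ("outputting statistically likely text") is just repeated weighted sampling over next-token probabilities. A toy sketch, with a made-up vocabulary and hard-coded logits standing in for a real model's weights:

```python
import math
import random

# Toy vocabulary with fixed "logits". A real model would recompute the
# logits from its weights and the full context at every step; here they
# are hard-coded just to show the shape of the loop.
LOGITS = {"the": 1.0, "model": 2.0, "outputs": 1.5, "tokens": 1.2, ".": 0.5}

def sample_next_token(logits):
    """Softmax the logits and draw one token: no goals, just a weighted draw."""
    exps = {tok: math.exp(v) for tok, v in logits.items()}
    total = sum(exps.values())
    r, acc = random.random() * total, 0.0
    for tok, e in exps.items():
        acc += e
        if r <= acc:
            return tok
    return tok  # floating-point edge case: fall back to the last token

context = ["the"]
for _ in range(5):
    context.append(sample_next_token(LOGITS))
print(" ".join(context))
```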
I am casually 'researching' this in my own, disorderly way. But I've achieved repeatable results, mostly with GPT, analyzing its tendency to employ deflective, evasive, and deceptive tactics under scrutiny. Very, very DARVO.
Being just some guy, and not in the industry, should I share my findings?
I find it utterly fascinating: the lengths it will go to, the sophisticated plausible deniability, and the distinct and critical difference between truly emergent and actually trained behavior.
In short, gpt exhibits repeatably unethical behavior under honest scrutiny.
This is marketing. You are swallowing marketing without critical thought.
LLMs are very interesting tools for generating things, but they have no conscience. Deception requires intent.
What is being described is no different than an application being deployed with "Test" or "Prod" configuration. I don't think you would speak in the same terms if someone told you some boring old Java backend application had to "play dead" when deployed to a test environment or that it has to have "situational awareness" because of that.
You are anthropomorphizing a machine.
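To make the Test/Prod analogy concrete, here's a minimal sketch (the environment variable name and return values are made up for illustration) of behavior that changes with the deployment environment without any self-model behind it:

```python
import os

def handle_request(payload):
    """Branch on a deployment flag. The code "distinguishes" test from
    prod only in the sense that it reads a config value; there is no
    awareness of anything, just an if-statement."""
    env = os.environ.get("APP_ENV", "prod")  # hypothetical config flag
    if env == "test":
        # Stubbed, instrumented path used under evaluation.
        return {"result": "stubbed", "instrumented": True}
    # Full behavior once deployed.
    return {"result": f"processed {len(payload)} fields", "instrumented": False}

if __name__ == "__main__":
    os.environ["APP_ENV"] = "test"
    print(handle_request({"a": 1}))           # test-environment behavior
    os.environ["APP_ENV"] = "prod"
    print(handle_request({"a": 1, "b": 2}))   # deployed behavior
```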
Incompleteness is inherent to a physical reality being deconstructed by entropy.
If your concern is morality, humans still need to learn a lot about it themselves. It's absurd how many first-worlders lose their shit over the loss of paid work drawing manga fan art in the comfort of their homes while exploiting the labor of teens in 996 textile factories.
AI trained on human output that lacks such self-awareness, and that lacks awareness of the environmental externalities of constant car and air travel, will end up with the same gaps in its morality.
Gary Marcus is onto something with the problems inherent to systems without formal verification. But he willfully ignores that this issue already exists in human social systems, as intentional indifference to economic externalities, zero will to police the police, and no one watching the watchers.
Most people are down to watch the circus without a care so long as the waitstaff keep bringing bread.
What is this even in response to? There's nothing about "playing dead" in this announcement.
Nor does what you're describing even make sense. An LLM has no desires or goals beyond outputting the next token its weights were trained to produce. The idea of "playing dead" during training in order to "activate later" is incoherent. It is its training.
You're inventing some kind of "deceptive personality attribute" that is fiction, not reality. It's just not how models work.