
the_harpia_io · yesterday at 4:55 PM

You're right, and the Moltbook example actually supports the broader point - even Claude Opus with all its alignment training produced insecure code that shipped. The model fallback just widens the gap.

I agree nobody should rely on model alignment for security. My argument isn't "Claude is secure and local models aren't" - it's that the gap between what the model produces and what a human actually reviews narrows when the model at least flags obvious issues. A worse model means more surface area for things to slip through unreviewed.

But your core point stands: the responsibility is on you regardless of what model you use. The toolchain around the model matters more than the model itself.