Hacker News

Trusted access for the next era of cyber defense

81 points by surprisetalk yesterday at 8:07 PM | 61 comments

Comments

Avicebron yesterday at 10:40 PM

I don't think they've added enough cyber. My cyber workflow demands more trusted access for cyber so that I can use these cyber-permissive models for my cybersecurity.

alopha yesterday at 9:05 PM

That's a lot of waffle to try to say 'we've got a really scary next model coming real soon too, promise!'

ofjcihen yesterday at 8:59 PM

I love that in the era of having LLMs summarize everything all of these companies have opted for what I call the “YouTube streamer apology video” tone and length for these announcements.

This feels more or less like a way to get in the news after Anthropic's Mythos announcement by removing some guardrails. I'm still signing up though.

mikewarot today at 6:52 AM

It's important to keep perspective: the holes that everyone (including LLMs now) keeps finding in pretty much everything are mostly the fault of running things with ambient authority, instead of using systems based on default deny and capabilities.

I used to think we were 20 years away from a shift to capability-based operating systems, which were ----> this <---- close to being adopted widely when the PC revolution swept them aside.

Unfortunately, I think we're about to repeat history, and we're now 20+ years out from actually solving things, AGAIN. 8(
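The ambient-authority vs. capability distinction above can be sketched in a few lines. This is an illustrative Python toy of my own (the `ReadCap` class and `summarize` function are invented for the example, not from any real capability OS): under ambient authority any code can `open()` any path the process can reach, while under a capability discipline a function can only touch the specific handle it was explicitly passed.

```python
import tempfile

class ReadCap:
    """A capability: an unforgeable token granting read access to one file.

    Holding the object *is* the permission; no global ACL check is involved.
    """
    def __init__(self, fileobj):
        self._f = fileobj

    def read(self):
        self._f.seek(0)
        return self._f.read()

def summarize(cap: ReadCap) -> int:
    # This function has no ambient authority: it cannot open arbitrary
    # paths, only use the single capability it was handed.
    return len(cap.read())

if __name__ == "__main__":
    with tempfile.TemporaryFile(mode="w+") as f:
        f.write("hello world")
        print(summarize(ReadCap(f)))  # -> 11
```

The "default deny" part is that `summarize` is denied everything by construction; the caller grants exactly one narrowly scoped right by passing the object, rather than the OS granting the whole process filesystem-wide access.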

gavinray yesterday at 9:38 PM

I completed the "Trusted Access" verification, but it seems to have unlocked nothing in the OpenAI API or Codex models.

Just FYI for others.

bunnywantspluto yesterday at 9:31 PM

It seems like local LLMs will get popular for cybersecurity if this trend of locking access to models continues.

iammjm yesterday at 9:20 PM

"trusted" + openai just simply doesn't compute for me any more

Havoc yesterday at 9:30 PM

>democratized access

>partner with a limited set of organizations for more cyber-permissive models.

I get where they're going with this, but it's still rather hilarious how they had to get a corporate-speak expert to pull off the mental gymnastics needed for the announcement.

greatgib yesterday at 11:43 PM

All of that reminds me about how gpt2 was almost too dangerous to be released to the world...

show 2 replies
2001zhaozhao yesterday at 9:58 PM

Requiring verified access is a good idea to mitigate risks from hacking while still giving people access to the latest models. Take notes, Anthropic.

nullc yesterday at 11:51 PM

Make cyber not cyber.

CompoundEyes today at 1:13 AM

Wonder if Cyber would’ve caught the Claude Code source map leak?

mmooss yesterday at 9:21 PM

This approach means only a tiny portion of the population will ever qualify. Doesn't that make everyone else beholden to those few, who are in turn beholden to OpenAI?

Another solution is to make software makers responsible and liable for the output of their products. It's long been a problem that there is little legal responsibility, but we shouldn't just accept it. If Ford makes exploding cars, they are liable. If OpenAI makes software that endangers people, it should be the same.

> Democratized access: Our goal is to make these tools as widely available as possible while preventing misuse. We design mechanisms which avoid arbitrarily deciding who gets access for legitimate use and who doesn’t. That means using clear, objective criteria and methods – such as strong KYC and identity verification – to guide who can access more advanced capabilities and automating these processes over time.

KYC isn't democratic and doesn't prevent arbitrary favoritism, it's the opposite: It's used to control people and to favor friends and exclude enemies.

rishabhaiover today at 1:24 AM

I mean Anthropic clearly wins with the name (Mythos vs 'GPT-5.4-Cyber')

zb3 yesterday at 9:33 PM

> Ultimately, we aim to make advanced defensive capabilities available to legitimate actors large and small, including those responsible for protecting critical infrastructure, public services, and the digital systems people depend on every day.

Translation: we aim to make defensive capabilities available to the US and its vassals so they can protect critical infrastructure, while ensuring that independent countries can't protect against the US attacking theirs.

Fortunately, this plan will backfire - the model capability is exaggerated and these "safeguards" don't reliably work.

Phelinofist yesterday at 9:19 PM

Sounds totally reasonable to trust OpenAI and the sociopath sama.

ACCount37 yesterday at 9:31 PM

Too little too late. OpenAI's shit was nearly worthless for cybersec for what, a year already?

ChatGPT 5.x just tries to deny everything remotely cybersecurity-related - to the point that it would at times rather deny vulnerabilities exist than go poke at them. Unless you get real creative with prompting and basically jailbreak it. And it was this bad BEFORE they started messing around with 5.4 access specifically.

And that was ChatGPT 5.4. A model that, by all metrics and all vibes, doesn't even have a decisive advantage over Opus 4.6 - which just does whatever the fuck you want out of the box.

What I'm afraid of most is that Anthropic is going to snort whatever it is OpenAI is high on, and lock down Mythos the way OpenAI is locking down everything.
