Hacker News

GPT-5.5

1474 points by rd, yesterday at 6:01 PM | 984 comments

Comments

theihtisham, today at 2:34 AM

I just installed Codex and gave GPT-5.5 a try. It's good compared to the previous one.

c0rruptbytes, yesterday at 10:16 PM

literally cannot launch the codex app anymore

PilotJeff, today at 2:47 AM

So exhausted from all this endless bs… Keep releasing. This reminds me of all the .com-era software: wow, we're already at version 3.0 and it's only been 60 days.

aussieguy1234, today at 12:28 AM

If SWE-Bench Verified is no longer a good measure of agentic coding abilities, what benchmark now is?

jawiggins, yesterday at 7:58 PM

What is the major and minor semver meaning for these models? Is each minor release a new fine-tuning with a new subset of example data while the major releases are made from scratch? Or do they even mean anything at this point?

elAhmo, yesterday at 7:40 PM

Is Codex receiving the 5.4 or the 5.5 release?

I am still using Codex 5.3 and haven't switched to GPT 5.4, as I don't like the "it's automatic, bro, trust us" approach, so I'm wondering whether Codex is going to get these specific releases at all in the future.

journal, yesterday at 11:33 PM

Does it have cached pricing?

jedisct1, yesterday at 9:07 PM

GPT-5.4 is already an incredible model for code reviews and security audits with the swival.dev /audit command.

The fact that GPT-5.5 is apparently even better at long-running tasks is very exciting. I don’t have access to it yet, but I’m really looking forward to trying it.

wslh, yesterday at 9:06 PM

Related and insightful: "GPT-5.5: Mythos-Like Hacking, Open to All" [1].

[1] https://news.ycombinator.com/item?id=47879330

ant6n, yesterday at 8:41 PM

My impression has been that ChatGPT-5.4 has been getting dumber and more exhausting over the last couple of weeks. It makes a lot of obvious mistakes, ignores (parts of) prompts, and keeps forgetting important facts or requirements.

Maybe this is a crazy theory, but I sometimes feel like they gimp their existing models before a big release so you'll notice more of a "step".

varispeed, yesterday at 7:19 PM

I am sceptical. The generations after the 4o models have become crappier and crappier. I hope this one changes the trend. 5.4 is unusable for complex coding work.

mondojesus, yesterday at 6:48 PM

I'm still using 5.3 in codex. Are 5.4 and 5.5 better than 5.3 in concrete ways?

enraged_camel, yesterday at 6:35 PM

Is this the first time OpenAI compared their new release to Anthropic models? Previously they were comparing only to GPT's own previous versions.

k2xl, yesterday at 6:24 PM

ARC-AGI 3 is missing from this list. Given that the SOTA before 5.5 was <1% if I recall, I wonder if this release didn't make meaningful progress.

cmrdporcupine, yesterday at 6:13 PM

Not rolled out to my Codex CLI yet, but some users on Reddit are claiming it's on theirs.

damnitbuilds, today at 11:43 AM

Woop woop!

Now, after all this time, this must surely be the release that does all software developers out of a job?

Or has Dirty Sam been caught lying, again?

Cos I've still got a programming job, and GPT can't do it for shit.

xnx, yesterday at 6:21 PM

Next up: Google I/O on May 19?

I have to imagine they'll go to Gemini 3.5 if only for marketing reasons.

luqtas, yesterday at 6:05 PM

They are using ethical training weights this time!!! /j

throwaw12, yesterday at 6:47 PM

If anyone has tried it already, how do you feel?

The numbers look too good; I'm wondering whether it is benchmaxxed or not.

i_love_retros, yesterday at 9:00 PM

Oh shiiiiit boy! An incrementation dropped!!

yuvrajmalgat, yesterday at 7:21 PM

finally

baxuz, yesterday at 8:28 PM

Ah yes, the next "trust me bro"

wiseowise, yesterday at 8:47 PM

> One engineer at NVIDIA who had early access to the model went as far as to say: "Losing access to GPT‑5.5 feels like I've had a limb amputated."

Everybody understands that you need to make money, but can you tone down the f*cking FOMO, please? It sounds just pathetic at this point:

'one engineer at NVIDIA', 'limb amputated'

Put the cunt in a room and give me a handsaw, I want to see how fast he'll give up his arm over some cloud model.

MagicMoonlight, yesterday at 6:35 PM

Two hundred pages of shilling and it's a 1% improvement on the benchmarks. They're dead in the water.

Imagine spending $100m on some of these AI "geniuses" and this is the best they can do.

XCSme, yesterday at 7:37 PM

2x the price for a 1-5% performance gain.

justonepost2, yesterday at 6:39 PM

the attenuation of man nears

< 5 years until humans are buffered out of existence tbh

may the light of potentia spread forth beyond us

coderssh, yesterday at 6:32 PM

Great model, I have been using Codex and it's awesome. Let's see what GPT-5.5 does to it.