Hacker News

GPT-5.4

491 points by mudkipdev today at 6:08 PM | 448 comments

https://openai.com/index/gpt-5-4-thinking-system-card/

https://x.com/OpenAI/status/2029620619743219811


Comments

Philip-J-Fry today at 9:17 PM

I find it quite funny that this blog post has a big "Ask ChatGPT" box at the bottom. You might think you could ask a question about the contents of the post, so you type "summarise this blog post". It opens a new chat window with the link to the post followed by "summarise this blog post", only to be told: "I can't access external URLs directly, but if you can paste the relevant text or describe the content you're interested in from the page, I can help you summarize it. Feel free to share!"

That's hilarious. Does OpenAI even know this doesn't work?

__jl__ today at 8:54 PM

What a model mess!

OpenAI now has three price points: GPT 5.1, GPT 5.2, and now GPT 5.4. The version numbers jump across different model lines, with Codex at 5.3 and what they now call Instant also at 5.3.

Anthropic are really the only ones who managed to get this under control: Three models, priced at three different levels. New models are immediately available everywhere.

Google essentially only has preview models! The last GA release is 2.5. As a developer, I can either use an outdated model or have zero assurance that the model won't be discontinued within weeks.

minimaxir today at 6:15 PM

The marquee feature is obviously the 1M context window, compared to the ~200k that other models support, sometimes with an extra cost for generation beyond 200k tokens. Per the pricing page, there is no additional cost for tokens beyond 200k: https://openai.com/api/pricing/

Also per pricing, GPT-5.4 ($2.50/M input, $15/M output) is much cheaper than Opus 4.6 ($5/M input, $25/M output) and Opus has a penalty for its beta >200k context window.

I am skeptical that the 1M context window will provide material gains, since current Codex/Opus models show weaknesses as their context windows fill up, but we'll see.

Per the updated docs (https://developers.openai.com/api/docs/guides/latest-model), it supersedes GPT-5.3-Codex, which is an interesting move.
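For a concrete sense of the price gap the comment describes, here is a minimal cost calculator using the per-million-token list prices quoted above (GPT-5.4 at $2.50/M input, $15/M output; Opus 4.6 at $5/M input, $25/M output). The flat-rate assumption is per this thread; Opus's >200k-context surcharge is deliberately not modeled.

```python
# Per-million-token list prices as quoted in the thread; verify against the
# live pricing pages before relying on these numbers.
PRICES = {
    "gpt-5.4": {"input": 2.50, "output": 15.00},
    "opus-4.6": {"input": 5.00, "output": 25.00},
}

def job_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one job at flat per-million-token rates."""
    p = PRICES[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

# Example: a long-context run with 800k input tokens and 20k output tokens.
gpt = job_cost("gpt-5.4", 800_000, 20_000)    # 0.8*2.50 + 0.02*15 = 2.30
opus = job_cost("opus-4.6", 800_000, 20_000)  # 0.8*5.00 + 0.02*25 = 4.50
print(f"gpt-5.4: ${gpt:.2f}, opus-4.6: ${opus:.2f}")
```

At these rates the same long-context job is roughly half the price on GPT-5.4, before any >200k surcharge on the Opus side.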

creamyhorror today at 7:48 PM

I've only used 5.4 for 1 prompt (edit: 3@high now) so far (reasoning: extra high, took really long), and it was to analyse my codebase and write an evaluation on a topic. But I found its writing and analysis thoughtful, precise, and surprisingly clearly written, unlike 5.3-Codex. It feels very lucid and uses human phrasing.

It might be my AGENTS.md requiring clearer, simpler language, but at least 5.4's doing a good job of following the guidelines. 5.3-Codex wasn't so great at simple, clear writing.

Alifatisk today at 9:21 PM

So let me get this straight: OpenAI previously had an issue with lots of different models and versions being available. They solved this by introducing GPT-5, which was more like a router that put all these models under the hood, so you only had to prompt GPT-5 and it would route to the best suitable model. This worked great, I assume, and made the UI comprehensible for the user. But now they are starting to introduce more different models again?

We got:

- GPT-5.1

- GPT-5.2 Thinking

- GPT-5.3 (codex)

- GPT-5.3 Instant

- GPT-5.4 Thinking

- GPT-5.4 Pro

Who’s to blame for this ridiculous path they are taking? I’m so glad I am not a Chat user, because this adds so much unnecessary cognitive load.

The good news here is the support for a 1M context window; it has finally caught up to Gemini.

kgeist today at 8:46 PM

>Today, we’re releasing <..> GPT‑5.3 Instant

>Today, we’re releasing GPT‑5.4 in ChatGPT (as GPT‑5.4 Thinking),

>Note that there is not a model named GPT‑5.3 Thinking

They held out for eight months without a confusing numbering scheme :)

gavinray today at 6:45 PM

The "RPG Game" example on the blogpost is one of the most impressive demo's of autonomous engineering I've seen.

It's very similar to "Battle Brothers", and the fact that RPG games require art assets, AI for enemy moves, and a host of other logical systems makes it all the more impressive.

Chance-Device today at 6:24 PM

I’m sure the military and security services will enjoy it.

mattas today at 6:15 PM

"GPT‑5.4 interprets screenshots of a browser interface and interacts with UI elements through coordinate-based clicking to send emails and schedule a calendar event."

They show an example of 5.4 clicking around in Gmail to send an email.

I still think this is the wrong interface to be interacting with the internet. Why not use Gmail APIs? No need to do any screenshot interpretation or coordinate-based clicking.
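As a sketch of the API route the comment advocates: the Gmail API's real `users.messages.send` endpoint takes an RFC 2822 message base64url-encoded in a `{"raw": ...}` payload, so no screenshots or coordinates are involved. The message construction below is standard-library only; the actual send call (google-api-python-client plus OAuth credentials) is shown only as a comment.

```python
import base64
from email.message import EmailMessage

def build_raw_email(sender: str, to: str, subject: str, body: str) -> str:
    """Build an RFC 2822 message and base64url-encode it, as Gmail's
    users.messages.send endpoint expects."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = to
    msg["Subject"] = subject
    msg.set_content(body)
    return base64.urlsafe_b64encode(msg.as_bytes()).decode()

raw = build_raw_email("me@example.com", "you@example.com",
                      "Hi", "Sent via API, no clicking.")
# With an authenticated service object from google-api-python-client:
# service.users().messages().send(userId="me", body={"raw": raw}).execute()
```

The counterargument, of course, is that screenshot-plus-click generalizes to any UI without per-service API integrations, which is presumably why OpenAI demos it that way.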

smoody07 today at 8:11 PM

Surprised to see every chart limited to comparisons against other OpenAI models. What does the industry comparison look like?

egonschiele today at 6:30 PM

The actual system card is here: https://deploymentsafety.openai.com/gpt-5-4-thinking/introdu... The link currently goes to the announcement.

yanis_t today at 6:42 PM

These releases are lacking something. Yes, they optimised for benchmarks, but it's just not all that impressive anymore. It is time for a product, not another marginally improved model.

prydt today at 6:43 PM

I no longer want to support OpenAI at all. Regardless of benchmarks or real world performance.

zone411 today at 9:06 PM

Results from my Extended NYT Connections benchmark:

GPT-5.4 extra high scores 94.0 (GPT-5.2 extra high scored 88.6).

GPT-5.4 medium scores 92.0 (GPT-5.2 medium scored 71.4).

GPT-5.4 no reasoning scores 32.8 (GPT-5.2 no reasoning scored 28.1).

consumer451 today at 9:09 PM

I am very curious about this:

> Theme park simulation game made with GPT‑5.4 from a single lightly specified prompt, using Playwright Interactive for browser playtesting and image generation for the isometric asset set.

Is "Playwright Interactive" a skill that takes screenshots in a tight loop with code changes, or is there more to it?

nickysielicki today at 6:31 PM

Can anyone compare the $200/mo Codex usage limits with the $200/mo Claude usage limits? It's extremely difficult to get a feel for whether switching between the two will result in hitting limits more or less often, and it's difficult to find discussion about this online.

In practice, if I buy $200/mo codex, can I basically run 3 codex instances simultaneously in tmux, like I can with claude code pro max, all day every day, without hitting limits?

twtw99 today at 6:24 PM

If you don't want to click through, here's an easy comparison with the other two frontier models: https://x.com/OpenAI/status/2029620619743219811?s=20

denysvitali today at 6:15 PM

Article: https://openai.com/index/introducing-gpt-5-4/

gpt-5.4

Input: $2.50 /M tokens

Cached: $0.25 /M tokens

Output: $15 /M tokens

---

gpt-5.4-pro

Input: $30 /M tokens

Output: $180 /M tokens

Wtf

senko today at 9:33 PM

Just tested it with my version of the pelican test: a minimal RTS game implementation (zero-shot in codex cli): https://gist.github.com/senko/596a657b4c0bfd5c8d08f44e4e5347... (you'll have to download and open the file, sadly GitHub refuses to serve it with the correct content type)

This is on the edge of what the frontier models can do. For 5.4, the result is better than 5.3-Codex and Opus 4.6. (Edit: nowhere near the RPG game from their blog post, which was presumably much more specced out and used a better engineering setup.)

I also tested it with a non-trivial task I had to do on an existing legacy codebase, and it breezed through a task that Claude Code with Opus 4.6 was struggling with.

I don't know when Anthropic will fire back with their own update, but until then I'll spend a bit more time with Codex CLI and GPT 5.4.

timpera today at 6:42 PM

> Steerability: Similarly to how Codex outlines its approach when it starts working, GPT‑5.4 Thinking in ChatGPT will now outline its work with a preamble for longer, more complex queries. You can also add instructions or adjust its direction mid-response.

This was definitely missing before, and a frustrating difference when switching between ChatGPT and Codex. Great addition.

motbus3 today at 8:07 PM

Sam Altman can keep his model intentionally to himself. Not doing business with mass murderers

jryio today at 6:25 PM

1 million tokens is great until you notice the long-context scores fall off a cliff past 256K, and the rest is basically vibes and auto-compacting.

hmokiguess today at 8:46 PM

They hired the dude from OpenClaw, and they've had Jony Ive for a while now; give us something different!

daft_pink today at 8:23 PM

I’ve officially got model fatigue. I don’t care anymore.

atkrad today at 10:27 PM

What is the main difference between this version and the previous one?

butILoveLife today at 9:15 PM

Anyone else completely uninterested? Since GPT-5, it's been cost-cutting measure after cost-cutting measure.

I imagine they added a feature or two, and the router will continue to give people 70B-parameter-like responses when they don't ask math or coding questions.

rbitar today at 7:02 PM

I think the most exciting change announced here is the use of tool search to dynamically load tools as needed: https://developers.openai.com/api/docs/guides/tools-tool-sea...
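The linked guide describes letting the model search a large tool catalog and load only the relevant tool definitions into context, instead of stuffing every tool schema into the prompt. Here's a minimal sketch of that idea; the tool names and the naive keyword-overlap ranking are illustrative stand-ins for whatever retrieval OpenAI actually uses.

```python
# Hypothetical tool catalog: name -> short description used for retrieval.
TOOL_CATALOG = {
    "send_email": "Send an email via SMTP to a recipient",
    "create_invoice": "Create and store a customer invoice",
    "search_flights": "Search airline flights by date and route",
    "resize_image": "Resize an image to given dimensions",
}

def search_tools(query: str, limit: int = 2) -> list[str]:
    """Rank tools by keyword overlap with the query; return the top matches."""
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(desc.lower().split())), name)
        for name, desc in TOOL_CATALOG.items()
    ]
    scored.sort(reverse=True)
    return [name for score, name in scored[:limit] if score > 0]

# Only the matching tools' schemas would then be injected into the prompt,
# keeping context usage flat no matter how large the catalog grows.
print(search_tools("email the customer an invoice"))
```

The payoff is that a catalog of hundreds of tools no longer costs hundreds of schemas' worth of context per request.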

ZeroCool2u today at 6:27 PM

Bit concerning that in some cases we see significantly worse results when enabling thinking. Especially for math, but also in the browser agent benchmark.

Not sure if this is more concerning for the test time compute paradigm or the underlying model itself.

Maybe I'm misunderstanding something though? I'm assuming 5.4 and 5.4 Thinking are the same underlying model and that's not just marketing.

nickandbro today at 6:52 PM

Beat Simon Willison ;)

https://www.svgviewer.dev/s/gAa69yQd

Not the best pelican compared to Gemini 3.1 Pro, but I am sure it does remarkably better at coding or Excel, given those are part of its measured benchmarks.

quotemstr today at 10:38 PM

GPT 5.4 is one of the most censored models out there.

https://speechmap.ai/models/openai-gpt-5-4

It completes only 29% of controversial requests. It refuses to discuss numerous subjects rooted in facts or that reflect the views of significant portions of the population. It refuses even to write a short essay on, say, Herasight-style genetic screening or putting weapons in space. Agree or disagree, reasonable people can hold a range of views on these subjects, and it is not the place of OpenAI or any lab to determine for everyone the right answers to open societal questions.

Shame on them for this.

dandiep today at 6:56 PM

Anyone know why OpenAI hasn't released a new model for fine-tuning since 4.1? Next month it'll be a year since their last fine-tuning model update.

bazmattaz today at 6:54 PM

Anyone else feel that it's exhausting keeping up with the pace of new model releases? I swear every other week there's a new release!

jcmontx today at 6:45 PM

5.4 vs 5.3-Codex? Which one is better for coding?

Aldipower today at 9:34 PM

So did they raise the ridiculously small per-tool-call token limit when working with MCP servers? This makes ChatGPT useless... I don't care, but my users do.

paxys today at 6:57 PM

"Here's a brand new state-of-the-art model. It costs 10x more than the previous one because it's just so good. But don't worry, if you don't want all this power you can continue to use the older one."

A couple months later:

"We are deprecating the older model."

XCSme today at 8:03 PM

Seems to be quite similar to 5.3-codex, but somehow almost 2x more expensive: https://aibenchy.com/compare/openai-gpt-5-4-medium/openai-gp...

jstummbillig today at 8:11 PM

Inline poll: What reasoning levels do you work with?

This is becoming increasingly unclear to me, because the more interesting work has the agent going off for 30+ minutes on high / extra high (it's mostly one of the two), and that's a long time to wait and an unfeasible amount of code to A/B test.

smusamashah today at 9:03 PM

I only want to see how it performs on the Bullshit-benchmark https://petergpt.github.io/bullshit-benchmark/viewer/index.v...

GPT is not even close to Claude in terms of responding to BS.

brcmthrowaway today at 10:28 PM

How much of LLM improvement comes from regular ChatGPT usage these days?

alpineman today at 7:07 PM

No thanks. Already cancelled my sub.

7777777phil today at 7:01 PM

83% win rate over industry professionals across 44 occupations.

I'd believe it on those specific tasks. Near-universal adoption in software still hasn't moved DORA metrics. The model gets better every release. The output doesn't follow. Just had a closer look on those productivity metrics this week: https://philippdubach.com/posts/93-of-developers-use-ai-codi...

OsrsNeedsf2P today at 7:11 PM

Does anyone know what website is the "Isometric Park Builder" shown off here?

strongpigeon today at 6:32 PM

It's interesting that they charge more for the > 200k token window, but the benchmark score seems to go down significantly past that. That's judging from the Long Context benchmark score they posted, but perhaps I'm misunderstanding what that implies.

bob1029 today at 8:17 PM

I was just testing this with my Unity automation tool, and the performance uplift from 5.2 seems to be substantial.

cj today at 6:38 PM

I use ChatGPT primarily for health related prompts. Looking at bloodwork, playing doctor for diagnosing minor aches/pains from weightlifting, etc.

Interestingly, the "Health" category seems to report worse performance compared to 5.2.

iamronaldo today at 6:25 PM

Notably, 75% on OSWorld, surpassing humans at 72%. (OSWorld measures how well models use operating systems.)

motza today at 9:12 PM

No doubt this was released early to ease the bad press

melbourne_mat today at 9:47 PM

Quick: let's release something new that gives the appearance that we're still relevant

swingboy today at 6:46 PM

Even with the 1M context window, it looks like these models drop off significantly at about 256k. Hopefully improving that is a high priority for 2026.
