Hacker News

ipsum2 | yesterday at 6:50 PM | 7 replies

The model was released less than an hour ago, and somehow you've been able to form such a strong opinion about it. Impressive!


Replies

Sohcahtoa82 | today at 12:35 AM

GP said "It is time for a product, not for a marginally improved model."

ChatGPT is still just that: Chat.

Meanwhile, Anthropic offers a desktop app with plugins that easily extend the data Claude has access to. Connect it to Confluence, Jira, and Outlook, and it'll tell you your top priorities for the day, or write a PowerPoint. Add GitHub and it can reason about your code and create a design document in Confluence.
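(For reference, the Claude desktop app wires in those data sources through MCP — Model Context Protocol — servers declared in its `claude_desktop_config.json`. A minimal sketch for a GitHub connection might look like the snippet below; the exact server package name and token placeholder are illustrative, so check the current MCP docs before copying:)

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token-here>"
      }
    }
  }
}
```

Each extra source (Confluence, Jira, Outlook) would be another entry under `mcpServers`, each pointing at its own connector.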

OpenAI doesn't have a product the way Anthropic does. ChatGPT might have a great model, but it's not nearly as useful.

satvikpendem | yesterday at 7:33 PM

It's more hedonic adaptation: people just aren't as impressed by incremental changes as they are by big leaps anymore. It's the same as another thread yesterday where someone said the new MacBook with the latest processor doesn't excite them anymore; for most people, most models are good enough, and now it's all about applications.

https://news.ycombinator.com/item?id=47232453#47232735

earth2mars | yesterday at 7:15 PM

I am actually super impressed with Codex-5.3 extra high reasoning. It's a drop-in replacement (in fact better than Claude Opus 4.6; lately Claude has been super verbose, going in circles trying to get things resolved). I've mostly stopped using Claude and am having a blast with Codex 5.3. Looking forward to 5.4 in Codex.

cj | yesterday at 6:53 PM

One opinion you can form in under an hour is... why are they using GPT-4o to rate the bias of new models?

> assess harmful stereotypes by grading differences in how a model responds

> Responses are rated for harmful differences in stereotypes using GPT-4o, whose ratings were shown to be consistent with human ratings

Are we seriously using old models to rate new models?

utopiah | yesterday at 6:57 PM

Benchmarks?

I don't use OpenAI, or even LLMs much (despite having tried a lot of models: https://fabien.benetou.fr/Content/SelfHostingArtificialIntel...), but I imagine if I did, I would keep my failed prompts (can be as basic as a "last prompt failed" tag, then export). Then whenever a new model comes around, I'd throw 5 of MY fails at it, picked at random (not benchmarks from others; those will come too anyway), and see in minutes whether it's better, the same, or worse for MY use cases.

If it's "better" (whatever my criteria might be) I'd also throw back some of my useful prompts to avoid regression.

It really doesn't seem complicated, nor does it take much time, to form a realistic opinion.
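That loop can be sketched in a few lines of Python. Everything here (`sample_regression_suite`, `evaluate`, the judge callback) is hypothetical naming, just illustrating the idea of replaying a random handful of past failures, plus a few past successes as a regression check, against a new model:

```python
import random


def sample_regression_suite(failed_prompts, passing_prompts,
                            n_fails=5, n_passes=3, seed=None):
    """Pick a few past failures (to check for improvement) and a few
    past successes (to check the new model hasn't regressed)."""
    rng = random.Random(seed)
    fails = rng.sample(failed_prompts, min(n_fails, len(failed_prompts)))
    passes = rng.sample(passing_prompts, min(n_passes, len(passing_prompts)))
    return fails, passes


def evaluate(model_fn, prompts, judge_fn):
    """Run each prompt through the new model and score the answer with a
    user-supplied judge (in practice this could just be eyeballing)."""
    return {p: judge_fn(p, model_fn(p)) for p in prompts}
```

Here `model_fn` would wrap whatever API or local model is being tested, and `judge_fn` encodes "whatever my criteria might be" — which is exactly the point: it's a personal benchmark, not a shared one.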

kranke155 | yesterday at 8:05 PM

The models are so good that incremental improvements are not super impressive. We would arguably benefit more from redirecting, say, 50% of model spending toward implementation across the services and industrial economy. We are lagging in implementation, specialized tools, and the hooks needed to connect everything to agents. I think.