Hacker News

commandar | last Thursday at 7:01 PM | 6 replies

In particular, the API pricing for GPT-5.2 Pro has me wondering what on earth the possible market for that model is beyond getting to claim a couple of percent higher benchmark performance in press releases.

>Input: $21.00 / 1M tokens

>Output: $168.00 / 1M tokens

That's the most "don't use this" pricing I've seen on a model.

https://openai.com/api/pricing/
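
To put that in concrete terms, here's a rough back-of-envelope sketch of what a single call might cost at those rates (the token counts are made-up illustrative numbers, not anything from OpenAI):

    # Back-of-envelope cost of one GPT-5.2 Pro API call at the listed rates.
    INPUT_PER_TOKEN = 21.00 / 1_000_000    # $21.00 per 1M input tokens
    OUTPUT_PER_TOKEN = 168.00 / 1_000_000  # $168.00 per 1M output tokens

    # Hypothetical request: a decent-sized prompt plus a long reasoning/response.
    input_tokens = 20_000
    output_tokens = 50_000

    cost = input_tokens * INPUT_PER_TOKEN + output_tokens * OUTPUT_PER_TOKEN
    print(f"${cost:.2f}")  # ~$8.82 for a single request at these assumed sizes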


Replies

aimanbenbaha | last Thursday at 7:22 PM

Last year, o3 high scored 88% on ARC-AGI-1 at more than $4,000/task. This model, at its X-high configuration, scores 90.5% at just $11.64 per task.

General intelligence has gotten ridiculously less expensive. I don't know if it's because of compute and energy abundance, attention mechanisms improving in efficiency, or both, but we have to acknowledge the bigger picture and the relative prices.
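
Taking the two per-task figures above at face value, that works out to roughly a 340x reduction in cost per task; a quick sketch of the arithmetic, using only the numbers quoted in this comment:

    # Cost-per-task figures quoted above (ARC-AGI-1, ~88-90% accuracy regime).
    o3_high_cost_per_task = 4000.00   # "more than $4,000/task" last year
    new_model_cost_per_task = 11.64   # quoted for the new model's high setting

    print(f"{o3_high_cost_per_task / new_model_cost_per_task:.0f}x cheaper")  # ~344x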

asgraham | last Thursday at 7:13 PM

Those prices seem geared toward people who are completely price insensitive, who just want "the best" at any cost. If the margins on that premium model are as high as they should be, it's a smart business move to give them what they want.

arthurcolle | last Thursday at 7:09 PM

gpt-4-32k pricing was originally $60.00 / $120.00 per 1M tokens (input / output).

wahnfrieden | last Thursday at 7:17 PM

Pro solves many problems for me on the first try that the other 5.1 models can't crack even after many iterations. I don't pay API pricing, but if I could afford it, I would in some cases, for the much larger context window it affords when a problem calls for it. I'd rather spend some tens of dollars to solve a problem than grind at it for hours.

reactordev | last Thursday at 7:08 PM

Less of an issue if your company is paying.

Leynos | last Thursday at 7:13 PM

Someone on Reddit reported being charged $17 for one prompt on 5-pro, which suggests around 125,000 reasoning tokens.

Makes me feel guilty for spamming pro with any random question I have multiple times a day.
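
For what it's worth, the reverse calculation is simple if you assume the $17 charge was dominated by output/reasoning tokens; the per-1M output rates below are just illustrative assumptions, since the exact rate behind that estimate isn't stated:

    # Roughly how many output/reasoning tokens a $17 charge implies,
    # assuming the bill is dominated by output tokens.
    charge = 17.00
    for rate_per_million in (120.00, 168.00):  # illustrative output rates, $/1M tokens
        tokens = charge / (rate_per_million / 1_000_000)
        print(f"${rate_per_million}/1M -> ~{tokens:,.0f} tokens")
    # ~141,667 tokens at $120/1M; ~101,190 tokens at $168/1M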