Hacker News

EugeneOZ · yesterday at 5:56 PM · 2 replies

Not in my experience. Quoting my tweet:

Gave the same prompt to GPT 5.4 (high) and Opus 4.6 (high).

GPT 5.4 implemented the feature, refactored the code (was not asked to), removed comments that were not added in that session, made the code less readable, and introduced a bug. "Undo All".

Opus 4.6 correctly recognized that the feature is already implemented in the current code (yeah, lol) and proposed implementing tests and updating the docs.

Opus 4.6 is still the best coding agent.

So yeah, GPT 5.4 (high) didn't even check if the feature was already implemented.

Tried other tasks, tried "medium" reasoning - disappointment.


Replies

frde · yesterday at 10:48 PM

Is this a sample size of one task, or a consistent finding across many tasks?

hirvi74 · yesterday at 6:36 PM

I make ChatGPT and Claude code review each other's outputs. ChatGPT thinks its solutions are better than what Claude produces. What was more surprising to me is that Claude, more often than not, prefers ChatGPT's responses too.
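That cross-review loop can be sketched roughly as below. This is a minimal illustration, not anyone's actual setup: `cross_review` and the stub model functions are hypothetical names, and the lambdas stand in for real API calls (e.g. via the OpenAI or Anthropic SDKs) so the sketch runs without keys.

```python
def cross_review(model_a, model_b, task):
    """Have two models solve the same task, then review each other's solution.

    model_a and model_b are any callables mapping a prompt string to a
    response string (in practice, thin wrappers around model API calls).
    """
    sol_a = model_a(task)
    sol_b = model_b(task)
    # Each model critiques the *other* model's solution.
    review_of_b = model_a(f"Review this solution:\n{sol_b}")
    review_of_a = model_b(f"Review this solution:\n{sol_a}")
    return review_of_a, review_of_b

# Stub "models" so the sketch is self-contained and runnable offline.
gpt = lambda prompt: f"[gpt] {prompt[:20]}"
claude = lambda prompt: f"[claude] {prompt[:20]}"

review_a, review_b = cross_review(gpt, claude, "implement fizzbuzz")
print(review_a)
print(review_b)
```

Swapping the stubs for real clients is the only change needed; the loop itself stays the same.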

I am not sure one can really extrapolate much from that, but I do find it interesting nonetheless.

I think language is also an important factor. I have a hard time deciding which of the two LLMs is worse at Swift, for example. They both seem equally great and awful in different ways.
