According to the benchmarks in the announcement, it's healthily ahead of Claude 4.6. I guess they didn't test ChatGPT 5.3 though.
Google has definitely been pulling ahead in AI over the last few months. I've been using Gemini and finding it's better than the other models (especially for biology where it doesn't refuse to answer harmless questions).
The general-purpose ChatGPT 5.3 hasn't been released yet, just 5.3-codex.
It's ahead in raw power but not in function. It's like having the world's fastest engine but only one gear! Trouble is, some benchmarks only measure horsepower.
> especially for biology where it doesn't refuse to answer harmless questions
Usually, when you decrease false positive rates, you increase false negative rates.
Maybe this doesn't matter for models at their current capabilities, but if you believe that AGI is imminent, a bit of conservatism seems responsible.
I gather that 4.6's strengths are in long-context agentic workflows? At least compared to Gemini 3 Pro Preview, Opus 4.6 seems to have a lot of advantages.
Google's models and CLI harness feel behind in agentic coding compared to OpenAI and Anthropic.
The comparison should be with GPT 5.2 Pro, which has been used successfully to solve open math problems.
Google is way ahead in visual AI and world modeling. They're lagging hard in agentic AI and autonomous behavior.