Still 256K input tokens. So disappointing (predictable, but disappointing).
https://platform.openai.com/docs/models/gpt-5.2
400k, not 256k.
It's much harder to train models on longer context inputs.