> The idea is to have a chip with SRAM large enough to fit the entire model, so inference can happen entirely in-memory. [...] So how much internal memory does the latest Cerebras chip have? 44GB. This puts OpenAI in kind of an awkward position. 44GB is enough to fit a small model (~20B params at fp16, ~40B params at int8 quantization), but clearly not enough to fit GPT-5.3-Codex.
You don't really need to fit the entire model on a single chip. Just as with GPUs, you can shard the model across multiple chips. Of course when you have a long pipeline of chips that each token needs to pass through, that decreases the end-to-end tokens per second correspondingly.
So the size of GPT-5.3-Codex-Spark isn't limited by the memory of a single Cerebras chip, but by the number of such chips you can chain together while still hitting the 1000 tokens-per-second target. Given that Cerebras offers models much larger than 40B at faster speeds (https://www.cerebras.ai/pricing#exploration), GPT-5.3-Codex-Spark is likely closer in size to GLM 4.7 (≈355B total parameters, 32B active).
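The back-of-envelope sizing in the quoted article is easy to sanity-check. A rough sketch (weights only; this ignores KV cache, activations, and runtime overhead, so real capacity is somewhat lower):

```python
# How many parameters fit in 44 GB of on-chip SRAM at a given precision?
# Weights only -- ignores KV cache, activations, and runtime overhead.
SRAM_GB = 44

def max_params_billions(bytes_per_param: float, sram_gb: float = SRAM_GB) -> float:
    """Largest weight count (in billions) that fits in the given SRAM."""
    return sram_gb * 1e9 / bytes_per_param / 1e9

print(f"fp16: ~{max_params_billions(2):.0f}B params")  # ~22B
print(f"int8: ~{max_params_billions(1):.0f}B params")  # ~44B
```

This matches the article's ballpark of ~20B at fp16 and ~40B at int8, but says nothing about multi-chip configurations, which is where the comments below take issue with the argument.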
> Given that Cerebras offers models much larger than 40B at faster speeds
This fact really should have given the author pause. It's hard to take any of his claims seriously in light of it.
> Of course when you have a long pipeline of chips that each token needs to pass through, that decreases the end-to-end tokens per second correspondingly.
No, it only increases the latency, and does not affect the throughput.
> So the size of GPT-5.3-Codex-Spark isn't limited by the memory of a single Cerebras chip, but the number of such chips that you can chain together and still hit the 1000 tokens per second target.
Chaining chips does not decrease token throughput. In theory, you could run models of any size on Cerebras chips. See for example Groq's (not to be confused with Grok) chips, which have only 230 MB of SRAM each, yet manage to run Kimi K2.
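The latency-vs-throughput distinction is worth making concrete. A toy model, assuming perfect pipelining with decode steps from independent sequences filling the stages (so a new token enters every stage interval; `stage_ms` and the stage counts are made-up illustrative numbers):

```python
# Toy pipeline-parallelism model: each token passes through num_stages
# chips, each taking stage_ms. First-token latency grows linearly with
# depth, but steady-state throughput stays at one token per stage_ms.
def pipeline_stats(num_stages: int, stage_ms: float, num_tokens: int):
    """Return (first-token latency in ms, steady-state throughput in tok/s)."""
    first_token_latency_ms = num_stages * stage_ms
    # Token i exits at (num_stages + i) * stage_ms, so total wall time is:
    total_ms = (num_stages + num_tokens - 1) * stage_ms
    throughput_tok_per_s = num_tokens / (total_ms / 1000)
    return first_token_latency_ms, throughput_tok_per_s

for p in (1, 4, 16):
    lat, tput = pipeline_stats(p, stage_ms=1.0, num_tokens=10_000)
    print(f"{p:2d} stages: latency {lat:5.1f} ms, throughput {tput:7.1f} tok/s")
```

At 10,000 tokens the throughput stays near 1000 tok/s whether the pipeline has 1 or 16 stages, while latency scales with depth, which is the commenter's point. (For a single autoregressive sequence, where each token depends on the previous one, per-user tokens/sec would be affected by depth; the throughput claim holds for aggregate serving.)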
Sharding the model is really slow. The point of building a wafer-scale chip is that on-chip memory bandwidth is far higher than you would get even from chiplets on an interposer with a high-bandwidth connection, let alone from going off-chip. Sharding gives up that whole advantage, especially since Cerebras clearly isn't trying to maximize total throughput per watt - Groq, TPUs, and even the latest Nvidia solutions are preferable there.