
awnihannun · last Saturday at 12:33 AM

Right, my comment was mostly about decoding speed. For prefill you can get a speedup, but there you are less latency bound.

In our benchmarks with MLX / mlx-lm, it's as much as a 3.5x speedup for token generation (decoding) at batch size 1 across 4 machines. In that case you are memory-bandwidth bound, so sharding the model and KV cache 4 ways means each machine only needs to access 1/4 as much memory.
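
As a rough sketch of why the sharding helps here (all numbers below are illustrative assumptions, not measured MLX figures): at batch size 1 each decoded token has to stream the weights plus the KV cache from memory once, so splitting them over N machines cuts per-machine traffic by N, minus whatever the per-token activation sync costs:

    # Back-of-envelope model of memory-bandwidth-bound decoding.
    # Sizes, bandwidth, and comm cost are hypothetical, for illustration only.

    def tokens_per_sec(model_gb, kv_cache_gb, bandwidth_gbs,
                       n_machines=1, comm_s_per_token=0.0):
        # Each token streams the (sharded) weights + KV cache once;
        # sharding over n_machines divides those bytes by n, but adds
        # a per-token communication cost for syncing activations.
        gb_per_token = (model_gb + kv_cache_gb) / n_machines
        s_per_token = gb_per_token / bandwidth_gbs + comm_s_per_token
        return 1.0 / s_per_token

    # Hypothetical: 70 GB of weights, 10 GB of KV cache,
    # 400 GB/s memory bandwidth per machine, 10 ms of comm per token.
    one  = tokens_per_sec(70, 10, 400)
    four = tokens_per_sec(70, 10, 400, n_machines=4, comm_s_per_token=0.010)
    print(f"1 machine : {one:.1f} tok/s")
    print(f"4 machines: {four:.1f} tok/s ({four/one:.2f}x)")

With zero communication cost the ratio would approach 4x; the gap down to something like the 3.5x above is the interconnect overhead.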


Replies

liuliu · last Saturday at 1:20 AM

Oh! That's great to hear. Congrats! Now, I want to get the all-to-all primitives ready in s4nnc...