Hacker News

Andys · 06/27/2025

Thanks for sharing that.

Presumably if you split the elements into 16 shares (one for each CPU), summed them with 16 threads, and then summed the partial results at the end, random would be faster than sorted?
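
(For concreteness, a minimal sketch of that scheme in C++ with std::thread — the 16-way split and 4-byte ints come from this thread, while the function name, types, and chunking are illustrative assumptions, not the article's actual benchmark:)

    // Split the array into one shard per thread, sum each shard independently,
    // then combine the partial sums at the end.
    #include <algorithm>
    #include <cstdint>
    #include <numeric>
    #include <thread>
    #include <vector>

    int64_t parallel_sum(const std::vector<int32_t>& data, unsigned num_threads = 16) {
        std::vector<int64_t> partial(num_threads, 0);
        std::vector<std::thread> workers;
        const std::size_t chunk = (data.size() + num_threads - 1) / num_threads;

        for (unsigned t = 0; t < num_threads; ++t) {
            workers.emplace_back([&, t] {
                const std::size_t begin = std::min<std::size_t>(t * chunk, data.size());
                const std::size_t end   = std::min(begin + chunk, data.size());
                // Each thread walks its own contiguous shard of the array.
                partial[t] = std::accumulate(data.begin() + begin, data.begin() + end,
                                             int64_t{0});
            });
        }
        for (auto& w : workers) w.join();

        // Sum the per-thread partial results.
        return std::accumulate(partial.begin(), partial.end(), int64_t{0});
    }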


Replies

bee_rider · 06/27/2025

I don’t think random should be faster than contiguous access, if you parallelize both of them.

Although, it looks like that chip has a 1 MB L2 cache for each core. If these are 4-byte ints, then I guess they won't all fit in one core's L2, but maybe they can all start out in their respective cores' L2 if it is parallelized (well, depends on how you set it up).
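
(Back-of-the-envelope with those numbers: one 1 MB L2 holds 1 MB / 4 B ≈ 262k ints, and 16 cores together have 16 MB of L2, roughly 4.2M ints, so a per-core shard of up to ~262k ints could stay L2-resident even when the whole array doesn't fit in any single core's L2.)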

Maybe it will be closer. Contiguous should still win.
