Hacker News

Someone | 06/27/2025 | 1 reply

They also do not make use of the fact that matrix multiplication is associative, so M⁴ = M² × M², M⁸ = M⁴ × M⁴, etc. That means one can compute F₃₂ using 5 matrix multiplications; the article's approach uses 31.

It wouldn’t surprise me at all if doing those 5 matrix multiplications on a CPU were faster than doing 31 on the GPU.
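For concreteness, here is a minimal Python sketch of that repeated-squaring idea (the names mat_mul, mat_pow, and fib are mine, not from the article; the optional mod parameter anticipates the modular version quoted below):

```python
def mat_mul(a, b, mod=None):
    """Product of two 2x2 matrices, optionally reducing entries mod `mod`."""
    m = [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
          a[0][0] * b[0][1] + a[0][1] * b[1][1]],
         [a[1][0] * b[0][0] + a[1][1] * b[1][0],
          a[1][0] * b[0][1] + a[1][1] * b[1][1]]]
    if mod is not None:
        m = [[x % mod for x in row] for row in m]
    return m

def mat_pow(m, e, mod=None):
    """m**e by repeated squaring: O(log e) multiplications instead of e - 1."""
    result = [[1, 0], [0, 1]]          # 2x2 identity
    while e:
        if e & 1:
            result = mat_mul(result, m, mod)
        m = mat_mul(m, m, mod)
        e >>= 1
    return result

def fib(n, mod=None):
    """F(n), read off from M**n where M = [[1, 1], [1, 0]]."""
    return mat_pow([[1, 1], [1, 0]], n, mod)[0][1]
```

fib(32) returns 2178309, and the whole power M³² takes only a handful of 2x2 multiplications rather than 31.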

Also, FTA: “To calculate extremly large fibonacci numbers we need to avoid integer overflow. A simple way to avoid that is to just take everything modulo a given number in the matrix multiplication“

⇒ This also is NOT calculating the Fibonacci numbers; it cops out when things get difficult on a GPU.

If you want to compute Fₙ mod some constant for 1 ≤ n ≤ some large constant fast on a GPU, I think I would compute quite a few intermediate checkpoint matrices using the fast way, then spin up some ‘threads’ to fill in the values in between by repeatedly multiplying those checkpoints with the ‘standard’ matrix. A sketch of that idea is below.
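A rough sequential sketch of that checkpoint idea, reusing mat_mul and mat_pow from the sketch above (fib_range_mod and the stride parameter are my own names; on a GPU each segment would run as its own thread rather than in a Python loop):

```python
def fib_range_mod(n_max, mod, stride=1024):
    """Return [F(0) % mod, F(1) % mod, ..., F(n_max) % mod].

    Checkpoints M^0, M^stride, M^(2*stride), ... are computed with the
    fast squaring path; each segment between checkpoints is then filled
    in by repeatedly multiplying with the 'standard' matrix M.
    """
    M = [[1, 1], [1, 0]]
    out = [0] * (n_max + 1)
    for start in range(0, n_max + 1, stride):
        cur = mat_pow(M, start, mod)                    # checkpoint M^start
        for n in range(start, min(start + stride, n_max + 1)):
            out[n] = cur[0][1]                          # F(n) = (M^n)[0][1]
            cur = mat_mul(cur, M, mod)                  # advance to M^(n+1)
    return out
```

fib_range_mod(10, 1_000_000_007) gives [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55], and each segment is independent of the others, which is what would make the ‘threads’ part work.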


Replies

KeplerBoy | 06/27/2025

The matrix in question is tiny (2x2). There's no hope a GPU would ever outperform a CPU on that.

It only gets interesting if you need large matmuls or millions of small matmuls in parallel.
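For illustration, that batched case is the shape GPU libraries are built for; a hypothetical NumPy example (CuPy exposes the same batched matmul interface on the GPU):

```python
import numpy as np

# A million independent 2x2 matmuls in one batched call.
a = np.random.randint(0, 10, size=(1_000_000, 2, 2))
b = np.random.randint(0, 10, size=(1_000_000, 2, 2))
c = np.matmul(a, b)        # batched over the leading axis
print(c.shape)             # (1000000, 2, 2)
```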