Hacker News

arcanus · yesterday at 9:50 AM

Hopper had 60 TF FP64, Blackwell has 45 TF, and Rubin has 33 TF.

It is pretty clear that Nvidia is sunsetting FP64 support, and they are selling a story that no serious computational scientist I know believes: that you can use low-precision operations to emulate higher precision.

See for example, https://www.theregister.com/2026/01/18/nvidia_fp64_emulation...

It seems the emulation approach is slower, has more errors, and doesn't apply to FP64 vector operations, only matrix operations.
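For context on what "emulating higher precision with low-precision operations" means, here is a minimal CPU sketch of the general idea using compensated (double-single) arithmetic, where a pair of float32 values carries the rounding error that a single float32 accumulator loses. This is an illustration of the underlying error-free-transformation trick, not Nvidia's actual matrix-engine emulation scheme; the function names are mine.

```python
import numpy as np

def two_sum(a, b):
    # Knuth's error-free transformation: s + e equals a + b exactly
    # (assuming round-to-nearest float32 arithmetic).
    s = np.float32(a) + np.float32(b)
    bp = s - np.float32(a)
    e = (np.float32(a) - (s - bp)) + (np.float32(b) - bp)
    return s, e

def dd_sum(values):
    # Accumulate into a (hi, lo) float32 pair: hi holds the running sum,
    # lo collects the rounding error dropped at each step.
    hi = np.float32(0.0)
    lo = np.float32(0.0)
    for v in values:
        hi, e = two_sum(hi, np.float32(v))
        lo = np.float32(lo + e)
    return float(hi) + float(lo)

vals = [np.float32(1e-3)] * 100_000  # true sum is ~100.0

naive = np.float32(0.0)
for v in vals:
    naive = np.float32(naive + v)  # plain float32: rounding error accumulates

print(float(naive))   # drifts away from 100.0
print(dd_sum(vals))   # compensated pair stays much closer to 100.0
```

The matrix-engine schemes the article discusses are far more elaborate (they split FP64 operands across several low-precision matrix multiplies), but the trade-off the parent comment describes, i.e. extra operations and residual error versus native FP64, shows up even in this toy version.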


Replies

dgacmu · yesterday at 2:58 PM

This is kind of amazing - I still have a bunch of Titan Vs (2017-2018) that do 7 TF FP64. Eight years old and managing about a fifth of what Rubin does, and the numbers are probably closer if you divide by the power draw.

(Needless to say, the FP32 / int8 / etc. numbers are rather different.)