I love this article; the edge cases are where the seeming "simplicity" of floating-point numbers breaks down.
I recently wrote a chapter in the tiny-vllm course about floats in the context of LLM inference. It's much shorter and not as deep as this one, but anyone interested in the topic might like it too: https://github.com/jmaczan/tiny-vllm?tab=readme-ov-file#how-...
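One tiny illustration of the kind of breakage I mean (Python here, but the same IEEE 754 double behavior shows up in basically every language):

```python
import math

# 0.1 and 0.2 have no exact binary representation, so their
# sum is not exactly 0.3 -- the "simple" arithmetic lies a little.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004
print(a == 0.3)  # False

# Comparing with a tolerance is the usual workaround.
print(math.isclose(a, 0.3))  # True
```

Nothing deep, but it's the two-line demo I reach for whenever someone assumes floats behave like real numbers.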