Hacker News

TorchLean: Formalizing Neural Networks in Lean

59 points by matt_d last Sunday at 3:03 AM | 8 comments

Comments

measurablefunc today at 5:28 PM

I guess the next step would be adding support for quantized arithmetic.

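To make the suggestion concrete, here is a minimal sketch (plain PyTorch, my own illustration, not anything from the TorchLean project) of the affine int8 quantize/dequantize maps that "quantized arithmetic" usually means; the rounding and clamping error they introduce is exactly what a formalization would need to bound:

    import torch

    def quantize_int8(x: torch.Tensor, scale: float, zero_point: int) -> torch.Tensor:
        # Affine quantization: q = clamp(round(x / scale) + zero_point, -128, 127).
        q = torch.round(x / scale) + zero_point
        return q.clamp(-128, 127).to(torch.int8)

    def dequantize_int8(q: torch.Tensor, scale: float, zero_point: int) -> torch.Tensor:
        # Dequantization recovers x only up to rounding/clamping error;
        # bounding that error is the interesting proof obligation.
        return scale * (q.to(torch.float32) - zero_point)
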
westurner today at 7:22 PM

How could this lend insight into why the Fast Fourier Transform approximates self-attention?

> Because self-attention can be replaced with an FFT at a small loss in accuracy and with a reduction in kWh [1], I suspect that the Quantum Fourier Transform could also be substituted for attention in LLMs.

[1] "Fnet: Mixing tokens with fourier transforms" (2021) https://arxiv.org/abs/2105.03824 .. "Google Replaces BERT Self-Attention with Fourier Transform: 92% Accuracy, 7 Times Faster on GPUs" https://syncedreview.com/2021/05/14/deepmind-podracer-tpu-ba...

"Why formalize mathematics – more than catching errors" (2025) https://news.ycombinator.com/item?id=45695541

Can the Quantum Fourier Transform (QFT) and the Inverse Quantum Fourier Transform (IQFT) also be substituted for self-attention in LLMs, and do Lean formalizations provide any insight into how or why?
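One concrete piece of the "why": on amplitudes, the QFT on n qubits is the N x N unitary DFT matrix with N = 2^n, i.e. the same linear map a classical FFT computes, up to normalization and sign convention. A small numerical sketch (mine; the sign convention here happens to match torch.fft.ifft):

    import math
    import torch

    def qft_matrix(n_qubits: int) -> torch.Tensor:
        # QFT on n qubits: the N x N unitary matrix with entries
        # exp(2*pi*i*j*k/N) / sqrt(N), where N = 2 ** n_qubits.
        N = 2 ** n_qubits
        rows = torch.arange(N, dtype=torch.float64).reshape(-1, 1)
        cols = torch.arange(N, dtype=torch.float64).reshape(1, -1)
        phase = 2.0 * math.pi * rows * cols / N
        return torch.polar(torch.full((N, N), N ** -0.5, dtype=torch.float64), phase)

    # On a state vector this agrees with a (suitably normalized) classical FFT.
    v = torch.randn(8, dtype=torch.complex128)
    assert torch.allclose(qft_matrix(3) @ v, math.sqrt(8) * torch.fft.ifft(v))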
