
D-Machine today at 8:06 PM

If you read the section "Richer attention mechanisms", you can see that, no, the mechanism is not generally usable: it requires significant modification to become differentiable. They later speculate:

    While we do not yet know whether exact softmax attention
    can be maintained with the same efficiency, it is easy to
    approximate it with k-sparse softmax attention: retrieve
    the top-k keys and perform the softmax only over those
but if you have played around with training models that use topk or other hard-thresholding operations in e.g. PyTorch (or just think about how many gradients become exactly zero under such an operation), you know that these tend to work only in extremely limited, specific cases and make training even more finicky than it already is.
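
For concreteness, here is a minimal PyTorch sketch of the k-sparse variant the quote describes (the function name, shapes, single-head layout, and scaling are my own illustrative assumptions, not from the article), followed by a check of the gradient problem:

    import torch
    import torch.nn.functional as F

    def k_sparse_attention(q, k, v, top_k):
        # q: (n_q, d); k, v: (n_kv, d). Single head, no batching, for clarity.
        scores = q @ k.T / k.shape[-1] ** 0.5             # (n_q, n_kv) scaled dot products
        top_scores, top_idx = scores.topk(top_k, dim=-1)  # hard selection of k keys per query
        weights = F.softmax(top_scores, dim=-1)           # softmax only over the selected keys
        return torch.einsum('qk,qkd->qd', weights, v[top_idx])

    torch.manual_seed(0)
    q = torch.randn(2, 8)
    k = torch.randn(16, 8, requires_grad=True)
    v = torch.randn(16, 8)
    k_sparse_attention(q, k, v, top_k=4).sum().backward()
    # Keys that never make any query's top-k receive exactly zero gradient:
    print((k.grad.abs().sum(dim=-1) == 0).sum().item(), "of 16 keys got zero gradient")

Gradients flow only through the scores that were selected; the selection itself is piecewise constant, so no gradient signal ever pushes a currently-unselected key into the top-k.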

Replies

bee_rider today at 8:42 PM

I saw that, but the nearby image made it look like it might be plausible to replace the 1D line around their points with a pretty narrow 2D area. That could still be a somewhat effective filter, right?
