Am I the only one who feels the comments here don't sound organic at all?
I think it's more that people are fascinated by this curious architectural detail. I imagine it's fascinating to people who aren't exposed to the intricate details of computer architecture, which I assume is the vast majority here. It's a glimpse into a very odd world (which is day-to-day work in the HFT field, but people there rarely talk about it, and much less in such big words).
TBH, I didn't watch the video because the title is too click-baity for me and the video itself is too long. Instead, I looked at the benchmark results on the GitHub page, and sure, it's fascinating how significantly(!) you can thin the latency distribution just by using 10× more CPU cores/RAM/etc. Classic case of a bad trade-off.
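(For anyone who hasn't run into the trick before, here's a minimal sketch, in Go, of the "fire the same query at N replicas and take whichever answers first" idea; the names and latency numbers are made up, and I'm not claiming this is what the project actually does internally.)

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // lookup simulates one replica answering a query: usually fast,
    // but occasionally it hits a slow tail (the thing the trick hides).
    func lookup(query string) string {
        d := time.Duration(50+rand.Intn(50)) * time.Microsecond
        if rand.Intn(100) == 0 { // ~1% of requests are ~100x slower
            d = 10 * time.Millisecond
        }
        time.Sleep(d)
        return "result for " + query
    }

    // hedged fires the same query at n replicas and returns whichever
    // answer arrives first, trading n-fold CPU/RAM for a thinner tail.
    func hedged(query string, n int) string {
        ch := make(chan string, n) // buffered so the losing replicas don't block
        for i := 0; i < n; i++ {
            go func() { ch <- lookup(query) }()
        }
        return <-ch
    }

    func main() {
        start := time.Now()
        fmt.Println(hedged("some key", 10))
        fmt.Println("took", time.Since(start))
    }

With n=10 independent copies, the chance that all of them hit the 1% slow path is 0.01^10, so the tail essentially disappears in this toy model; you've just paid ten machines' worth of work for one answer, which is exactly the trade-off I mean.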
And nobody talked about what we usually use RAM for: not only storing static data, but also updating it when the need arises. This scheme is completely impractical for those cases. Additionally, if you really need low latency, as others have pointed out, you can go for other means of computation, such as FPGAs.
So I love this idea, I'm sure it's a fun topic to talk about at a hacker conference! But I'm really put off by the click-baity title of the video and the hype around it.
You're absolutely right
You're absolutely right to call this out. No humans, no emotion, no real comments - just LLM slop.
In all seriousness, agreed. The top comment at the time of this writing reads like a poor summarizing LLM, treating everything as the best thing since sliced bread. The end result is interesting, but neither this nor Google invented the technique of trying multiple things at once, as the comment implies.
I don’t see anything unusual
No, something is funny here. In the previous submission (https://news.ycombinator.com/item?id=47680023), the only (competent) critical comment (by jeffbee) was downvoted into oblivion and flagged.
Thank you, I was picking up on that too. Maybe she has fans here or something, but the vibe is off.
No, I felt the same way; they read exactly like the usual LLM bot comments, where an LLM recaps the OP and ends with a platitude or witty encouragement.
But all the accounts are old/legit, so I think you and I have just become paranoid...