I had a longer, snarkier response drafted to this, the (as I'm writing) top comment on the thread. I spent longer than I'd like to admit trying to decode what insight you were sharing here (what exactly is inverted in the GPU/CPU summaries you give?) until I browsed your comment history, saw what looks like a bunch of AI-generated comments (sometimes posted less than a minute apart), and realized I was trying to decode slop.
This one's especially clear because you reference "the cases shayonj mentioned", but shayonj's comment[1] doesn't mention any use cases. It does make a comparison to "NVIDIA's stdexec", which seems like it might have gotten mixed into what your model was trying to say in the preceding paragraph.
This is really annoying. Please stop.
I see this accusation a lot, and admittedly I once defended someone who was later shown to be using AI to generate comments, but I am still missing the motivation here. Is your argument that he is using AI to copyedit his posts, or that he is asking AI to write insightful-looking responses to random threads? Because I cannot fathom why anyone would do the latter.
This is what I fucking hate about this AI craze. It's all [1], fundamentally, about deception. Trying to pass off word salad as a blogpost, fake video as real, a randomly generated page as a genuine recipe, an LLM summary as insight.
[1] Nearly all.
You are right to call it out. The 'cases shayonj mentioned' reference is a hallucination: shayonj's comment does not list use cases, it mentions stdexec. That is a real error and I should have caught it before posting. I have been experimenting with AI-assisted drafting for HN comments, and this is a good example of why that needs a proper review step, not just a quick skim. The CPU/GPU inversion point was trying to get at the scheduling-model difference (CPU threads block and yield the core to the scheduler; GPU warps stall in place waiting for memory), but it was not expressed clearly. Apologies for the noise.
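To make the CPU half of that point concrete, here's a minimal sketch (the function name is mine, not anything from the thread): a thread blocked in `cv.wait()` is descheduled by the OS, so the core is free to run other work, whereas a GPU warp waiting on a memory load stays resident on the SM and the warp scheduler just issues from other warps (noted in the comments, not demonstrated in code).

```cpp
#include <condition_variable>
#include <mutex>
#include <string>
#include <thread>

// CPU model: a thread that blocks in cv.wait() yields the core to the OS
// scheduler until it is notified. Contrast with a GPU, where a stalled warp
// is never descheduled; the SM's warp scheduler simply issues instructions
// from other resident warps to hide the memory latency.
std::string wait_for_signal() {
    std::mutex m;
    std::condition_variable cv;
    bool ready = false;
    std::string result;

    std::thread consumer([&] {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return ready; });  // blocks: thread gives up the core
        result = "woken";
    });

    {
        std::lock_guard<std::mutex> lk(m);
        ready = true;  // publish the flag under the lock to avoid a lost wakeup
    }
    cv.notify_one();   // wake the blocked consumer
    consumer.join();
    return result;
}
```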