Hacker News

zozbot234 · yesterday at 5:49 PM · 4 replies

I'm not quite seeing the real benefit of this. Is the idea that warps will now be able to do work-stealing and continuation-stealing when running heterogeneous parallel workloads? But that requires keeping the async function's state in GPU-wide shared memory, which is generally a scarce resource.


Replies

nxobject · yesterday at 7:57 PM

God, as someone who took their elective on graphics programming when GPGPU and compute shaders first became a thing, reading this makes me realize I definitely need an update on what modern GPU uarchs are like now.

Re: heterogeneous workloads: I'm told by a friend in HPC that the old advice about avoiding divergent branches within warps is no longer much of an issue – is that true?

LegNeato · yesterday at 5:57 PM

Yes, that's the idea.

GPU-wide memory is not quite as scarce on datacenter cards or on systems with unified memory. One could also have local executors with local futures that are `!Send` and place them in a faster address space.
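For readers less familiar with the `!Send` idea: a future that captures an `Rc` is automatically `!Send`, so the type system guarantees only the executor that created it can ever poll it – which is what would let its state live in a fast, local address space. Here is a minimal CPU-side sketch (the names `Counter` and `block_on_local` are made up for illustration; a real GPU-local executor would look quite different):

```rust
use std::cell::Cell;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Holding an Rc makes this future !Send: it can only be polled by
// the executor that owns it, so its state never has to move.
struct Counter {
    state: Rc<Cell<u32>>,
}

impl Future for Counter {
    type Output = u32;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        let n = self.state.get();
        if n < 3 {
            self.state.set(n + 1);
            Poll::Pending // yield; the local executor will re-poll us
        } else {
            Poll::Ready(n)
        }
    }
}

// A no-op waker: good enough for a busy-polling local executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Minimal single-threaded "local executor": accepts !Send futures
// because it never hands them to another thread.
fn block_on_local<F: Future>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // Safety: fut lives on this stack frame and is never moved after pinning.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let out = block_on_local(Counter { state: Rc::new(Cell::new(0)) });
    println!("{out}"); // the counter yields three times, then completes
}
```

The same trait-level contract (`Future` + `!Send`) is what makes the "futures in a faster address space" idea expressible without any new language machinery.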

pjmlp · yesterday at 7:45 PM

This is already happening in C++: NVIDIA is the one pushing the senders/receivers proposal, which is one of the candidate coroutine runtimes to be added to the C++ standard library.

jmalicki · yesterday at 7:13 PM

A ton of GPU workloads keep large amounts of data resident in GPU RAM and run computations as new data streams in from the CPU.