Hacker News

pjmlp · yesterday at 9:54 AM

Additional points: CUDA is polyglot, and some people do care about writing their kernels in something other than C++, C, or Fortran, without going through code generation.

NVIDIA is acknowledging Python adoption, with cuTile and MLIR support for Python, allowing the same flexibility as C++: writing even the kernels directly in Python.

They seem to be supportive of having similar capabilities for Julia as well.

Then there is the IDE and graphical-debugger integration, and the library ecosystem, which now also has Python variants.

As someone who only follows GPGPU on the side, due to my interest in graphics programming, it is hard to understand how AMD and Intel keep failing to grasp what CUDA, the whole ecosystem, is actually about.

Like, just take the schedule of a random GTC conference: how much of it could I reproduce on oneAPI or ROCm as of today?