FWIW, Rust’s memory model is essentially identical to C++’s, by design. Atomics work the same, there’s pointer provenance, and so on.
Whether it is a convenient language for GPU programming probably remains to be seen, but I definitely wouldn’t be surprised if you could make a decent DSL-like API for writing safe code that leverages the full spectrum of GPU oddities. That’s what CUDA is, right?
CUDA hardware was originally designed without a formal memory model; after C++11, NVIDIA undertook a multi-year effort to redesign the hardware to match the C++ memory model’s semantics.
CppCon 2017: “Designing (New) C++ Hardware”
https://www.youtube.com/watch?v=86seb-iZCnI