Hacker News

fluoridation, yesterday at 4:14 PM (2 replies)

It's not. It's narrow even between the CPU and RAM. That's just the way x86 is designed. Nvidia and AMD by contrast have the luxury of being able to rearchitect their single-board computers each generation as long as they honor the PCIe interface.

It is also true, though, that sharing a 384-bit memory bus with the video card would require a redesigned PCIe slot as well as an outrageous number of traces on the motherboard.


Replies

adrian_b, yesterday at 8:28 PM

Traditionally, GPU memory interfaces were many times wider than CPU memory interfaces.

However, the maximum width in consumer GPUs, up to 1024 bits, was reached many years ago.

Since then, the width of the memory interfaces in consumer GPUs has decreased continuously, and this decrease has been only partially compensated by higher memory clock frequencies. The reduction has been driven by NVIDIA in order to increase its profit margins by lowering memory cost.

Nowadays, most GPU owners must be content with a memory interface no wider than 192 bits, as in the RTX 5070, which is only 50% wider than a desktop CPU's and much narrower than a workstation or server CPU's.
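As a rough sanity check on how much the higher memory clocks offset the narrower bus (a sketch; the 28 GT/s GDDR7 and DDR5-6000 transfer rates are assumed for illustration, not taken from the thread):

```python
# Peak memory bandwidth in GB/s: (bus width in bits / 8) bytes per transfer
# times transfers per second in GT/s.
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_gtps: float) -> float:
    return bus_width_bits / 8 * transfer_rate_gtps

# Assumed figures: 192-bit GDDR7 at 28 GT/s vs. a desktop CPU's
# 128-bit (dual-channel) DDR5-6000.
gpu = peak_bandwidth_gbs(192, 28.0)  # 672 GB/s
cpu = peak_bandwidth_gbs(128, 6.0)   # 96 GB/s
print(f"GPU {gpu:.0f} GB/s vs CPU {cpu:.0f} GB/s ({gpu / cpu:.0f}x)")
```

So even a "narrow" 192-bit GPU bus can deliver several times the bandwidth of a desktop CPU's, because GDDR runs at much higher transfer rates than DDR.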

The reason why using the main memory from a GPU is slow has nothing to do with the width of the CPU memory interface. It is caused by the fact that the GPU accesses main memory through PCIe, so it is limited by the throughput of at most 16 PCIe lanes, which is much lower than the throughput of either the GPU memory interface or the CPU memory interface.
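To put a number on that PCIe bottleneck (a sketch assuming a PCIe 5.0 link: 32 GT/s per lane with 128b/130b encoding; other generations scale by powers of two):

```python
# Per-direction PCIe throughput in GB/s: lanes * GT/s * encoding efficiency,
# divided by 8 bits per byte. PCIe 3.0+ uses 128b/130b encoding.
def pcie_throughput_gbs(lanes: int, gtps: float, encoding: float = 128 / 130) -> float:
    return lanes * gtps * encoding / 8

x16_gen5 = pcie_throughput_gbs(16, 32.0)  # roughly 63 GB/s per direction
print(f"PCIe 5.0 x16: {x16_gen5:.0f} GB/s per direction")
```

That ~63 GB/s is well below both a desktop CPU's ~100 GB/s of DRAM bandwidth and a GPU's hundreds of GB/s of GDDR bandwidth, which is why spilling GPU working sets to main memory hurts so much.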

dist-epoch, yesterday at 4:17 PM

ThreadRipper has 8 memory channels versus 2 for a desktop AMD CPU. It's not an x86 limitation.
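In bus-width terms, each DDR channel is conventionally counted as 64 bits (a sketch; DDR5 actually splits each DIMM into two 32-bit subchannels, but the aggregate width is the same):

```python
# Aggregate memory bus width from channel count, at 64 bits per channel.
def bus_width_bits(channels: int, bits_per_channel: int = 64) -> int:
    return channels * bits_per_channel

threadripper = bus_width_bits(8)  # 512-bit aggregate
desktop = bus_width_bits(2)       # 128-bit aggregate
print(f"ThreadRipper: {threadripper}-bit, desktop: {desktop}-bit")
```

So an 8-channel x86 part already exceeds the 384-bit width discussed above; the width is a product-segmentation choice, not an ISA constraint.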
