A fun experiment but I wonder how many out there seriously think we could ever completely rid ourselves of the CPU. It seems to be a rising sentiment.
The cost of communicating information through space is dealt with in fundamentally different ways here. On the CPU it is addressed directly. The actual latency is minimized as much as possible, usually by predicting the future in various ways and keeping the spatial extent of each device (core complex) as small as possible. The GPU hides latency with massive parallelism. That's why we can put them across relatively slow networks and still see excellent performance.
Latency hiding cannot deal well with workloads that are branchy and serialized, because you can only have one logical thread throughout. The CPU dominates this area because it doesn't cheat: it directly targets the objective. Making efficient, accurate control flow decisions tends to be more valuable than being able to process data in large volumes. It just happens that there are a few exceptions to this rule that are incredibly popular.
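To make the distinction concrete, here is a minimal Python sketch (my own illustration, not from any commenter) of the two workload shapes: a loop-carried dependency chain, where step *i* needs the result of step *i-1* and only one logical thread can make progress, versus an elementwise map with no cross-element dependencies, which is exactly the shape that lets a GPU hide latency by switching among thousands of independent elements. The function names are made up for illustration.

```python
def serial_chain(x: int, steps: int) -> int:
    # Loop-carried dependency: each iteration consumes the previous result,
    # and the branch taken depends on that result. There is no independent
    # work to switch to, so latency hiding buys nothing here; what helps is
    # a CPU's branch prediction and low per-operation latency.
    for _ in range(steps):
        x = (3 * x + 1) % 1009 if x % 2 else x // 2
    return x

def parallel_map(xs: list[int]) -> list[int]:
    # No cross-element dependencies: every element can be handled by an
    # independent thread, so while one thread waits on memory, a GPU can
    # simply run another. This is the shape massive parallelism rewards.
    return [v * v + 1 for v in xs]

print(serial_chain(7, 5))       # one thread, 5 dependent steps
print(parallel_map([1, 2, 3]))  # 3 fully independent pieces of work
```

The point of the sketch is the dependency structure, not the arithmetic: `serial_chain` has a critical path as long as the loop, while `parallel_map` has a critical path of one step regardless of input size.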
I see us not getting rid of the CPU, but the CPU and GPU eventually being consolidated into one system of heterogeneous computing units.
> I wonder how many out there seriously think we could ever completely rid ourselves of the CPU.
How do you class systems like the PS5 that have an APU plugged into GDDR instead of regular RAM? The primary remaining issue is the limited memory capacity.
I wonder if we might see a system with GPU class HBM on the package in lieu of VRAM coupled with regular RAM on the board for the CPU portion?
Mainframes still exist, so the CPU isn't going anywhere. Too useful of a tool.
I don't think we get rid of the CPU. But the relationship will be inverted. Instead of the CPU calling the GPU, it might be that the GPU becomes the central controller and builds programs and calls the CPU to execute tasks.
> I wonder how many out there seriously think we could ever completely rid ourselves of the CPU. It seems to be a rising sentiment.
This sentiment is not a recent thing. Ever since GPGPU became a thing, there have been people who hear about it for the first time, don't understand processor architectures, and get excited about GPUs magically making everything faster.
I vividly recall a discussion with some management type back in 2011, who was gushing about getting PHP to run on the new Nvidia Teslas, how amazingly fast websites will be!
Similar discussions also spring up around FPGAs again and again.
The more recent change in sentiment is a different one: the "graphics" origin of GPUs seems to have been lost to history. I have met people (plural) in recent years who thought (surprisingly long into the conversation) that I meant Stable Diffusion when talking about rendering pictures on a GPU.
Nowadays, the 'G' in GPU probably stands for GPGPU.