I've often wondered about doing this with extreme compression. What if you did extreme compression + decompression on the GPU? You're leaving a lot of compute unused otherwise.
I'm not sure, but I suspect that LLM weights don't compress all that well. The intuition here is that training an LLM is compression of the training data into the weights, so they are probably very information dense already. Can't squeeze them down much.
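You can get a feel for this intuition with a toy experiment: near-uniform random bytes (a stand-in for information-dense weights) barely compress at all under a general-purpose compressor, while redundant data compresses enormously. This is just an illustrative sketch, not a measurement on real checkpoint files:

```python
import random
import zlib

random.seed(0)

# Stand-in for information-dense weights: near-uniform random bytes.
dense = bytes(random.randrange(256) for _ in range(1 << 20))

# Stand-in for highly redundant data: the same bytes repeated.
redundant = (b"weights" * (1 << 18))[: 1 << 20]

for name, blob in [("dense", dense), ("redundant", redundant)]:
    ratio = len(zlib.compress(blob, 9)) / len(blob)
    print(f"{name}: compressed to {ratio:.4f} of original size")
```

The "dense" blob comes out at essentially 1.0x (zlib can even add slight overhead), while the repetitive blob shrinks to well under 1% of its size. If trained weights really are close to maximally information-dense, lossless compression should behave much more like the first case.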
I tried it, but with different quantization levels, and it ran into quality issues. I'll rerun with the same quants to see if that fixes it. As for the memory that looks unused: it's actually being used for rotating layers that the CPU swaps in from RAM. That keeps upcoming layers warm and ready to use during inference while already-used ones get discarded.
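The rotating-layer idea described above can be sketched as a producer/consumer prefetch loop: a background thread loads the next few layers ahead of the compute loop, a small bounded queue caps how many are warm at once, and each layer is dropped after use. This is a minimal illustrative sketch (the layer count, load delay, and `load_layer` helper are all made up for the example, not from any real inference runtime):

```python
import queue
import threading
import time

NUM_LAYERS = 8

def load_layer(i):
    # Stand-in for the CPU copying one layer's weights out of RAM.
    time.sleep(0.01)
    return f"layer-{i}-weights"

def prefetch(q):
    # Producer: keep the next layers warm, ahead of the compute loop.
    for i in range(NUM_LAYERS):
        q.put((i, load_layer(i)))

def run_inference():
    # Bounded queue: at most 2 layers held warm at any moment.
    q = queue.Queue(maxsize=2)
    threading.Thread(target=prefetch, args=(q,), daemon=True).start()
    layers_run = 0
    for _ in range(NUM_LAYERS):
        i, weights = q.get()  # next layer is already loaded and waiting
        layers_run += 1       # stand-in for the actual layer compute
        # `weights` goes out of scope here -> the used layer is discarded
    return layers_run

print(run_inference())  # 8
```

The bounded queue is what gives the overlap: loading layer N+1 happens concurrently with computing layer N, so the loop never stalls waiting on RAM as long as a layer's compute time covers its load time.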