Unfortunately, the article never explains why a GPU would need to compress (rather than decompress) images. What's the application for that? At the beginning the article mentions formats for computer-game textures, but I'm pretty sure those already ship compressed, and only need to be decompressed by the client GPUs.
Someone mentioned environment maps. Anything done with framebuffers or render-to-texture might benefit: water reflections and refractions, metal surfaces reflecting the world, mirrors in bathrooms, Panini distortion for high-FOV cameras, TV screens like the Breencasts in Half-Life 2.
There are many textures that can't be encoded in advance: images delivered in transmission formats such as JPEG or AVIF, procedural textures, terrain splatting, user-generated textures, environment maps, dynamic lightmaps, etc.
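To make "encoding at runtime" concrete, here is a toy CPU sketch of what GPU block compression computes for such textures. It follows the BC1 idea (each 4x4 pixel block becomes two RGB565 endpoint colors plus 2-bit per-pixel indices, 8 bytes total); the endpoint choice here is a crude min/max heuristic, not a real encoder, and production engines do this in a compute shader rather than in Python:

```python
# Toy BC1-style block compression sketch. Each 4x4 block of 24-bit RGB
# pixels (48 bytes) is reduced to 8 bytes: two RGB565 endpoints plus a
# 2-bit palette index per pixel. This is an illustration of the idea,
# not a spec-compliant BC1 encoder (e.g. it ignores the color0 > color1
# mode rule), and real runtime encoders run on the GPU.

def rgb565(r, g, b):
    """Pack an 8-bit RGB triple into a 16-bit 5:6:5 value."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def encode_block(pixels):
    """pixels: 16 (r, g, b) tuples for one 4x4 block -> 8-byte block."""
    # Crude endpoint pick: darkest and brightest pixels in the block.
    # Real encoders fit a line through the block's colors instead.
    lo = min(pixels, key=sum)
    hi = max(pixels, key=sum)
    # Four palette entries: the endpoints plus two interpolated colors.
    palette = [
        hi,
        lo,
        tuple((2 * h + l) // 3 for h, l in zip(hi, lo)),
        tuple((h + 2 * l) // 3 for h, l in zip(hi, lo)),
    ]
    # 2-bit index per pixel: nearest palette entry by squared distance.
    indices = 0
    for i, p in enumerate(pixels):
        best = min(range(4),
                   key=lambda j: sum((a - b) ** 2
                                     for a, b in zip(p, palette[j])))
        indices |= best << (2 * i)
    return (rgb565(*hi).to_bytes(2, "little")
            + rgb565(*lo).to_bytes(2, "little")
            + indices.to_bytes(4, "little"))

block = encode_block([(0, 0, 0)] * 8 + [(255, 255, 255)] * 8)
print(len(block))  # 8 bytes instead of 48
```

The fixed 8:1 ratio and fixed block size are exactly why this format works for render targets and dynamic textures: any block can be encoded or fetched independently, with no entropy coding or variable-length output in the way.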