There's no magic bullet for inference on cheap accelerators. Any accelerator will still require large amounts of high-bandwidth memory (HBM).
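To put a number on why bandwidth is the constraint: for single-stream decoding, every generated token has to stream the full set of weights out of memory at least once, so the memory system caps tokens per second no matter how much compute you bolt on. A rough back-of-the-envelope sketch (model size and bandwidth figures are illustrative assumptions, not measurements):

```python
# Roofline-style estimate: tokens/sec <= memory bandwidth / bytes of weights,
# assuming the whole model is read once per generated token.

def max_tokens_per_second(params_billion: float, bytes_per_param: float,
                          bandwidth_gb_s: float) -> float:
    """Upper bound on decode speed when weight streaming is the bottleneck."""
    weight_bytes = params_billion * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / weight_bytes

# Illustrative: a 70B-parameter model with 8-bit weights (~70 GB).
print(max_tokens_per_second(70, 1, 3350))  # ~48 tok/s on ~3.35 TB/s of HBM
print(max_tokens_per_second(70, 1, 100))   # ~1.4 tok/s on ~100 GB/s of plain DDR
```

Under those assumptions, a cheap DDR-backed accelerator is bandwidth-starved by a factor of ~30 before compute even enters the picture.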
Strictly speaking, there is that one startup that compiles entire models into a huge ASIC, with the trade-off that the hardware becomes outdated when a new model version is released two to three months later.
The way to do it _today_ requires enormous amounts of HBM! But we've never really designed dedicated inference accelerators; it's actually a fairly "trivial" problem, we've just never had the need.
Groq (acqui-hired by Nvidia) came up with a different processor architecture: metric shit-tons of SRAM attached to a modest single-core deterministic processor. No HBM needed on this card, and 32x faster inference than today's best GPUs!
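To see how swapping HBM for lots of on-chip SRAM changes that roofline, here's the same back-of-the-envelope estimate with an assumed aggregate SRAM bandwidth across a rack of chips (all numbers are my illustrative assumptions, not Groq specs):

```python
# Same bandwidth-bound decode estimate, comparing a single HBM-backed GPU
# against weights sharded into on-chip SRAM across many chips.
# All figures are illustrative assumptions, not vendor specifications.

def max_tokens_per_second(weight_gb: float, bandwidth_tb_s: float) -> float:
    """Tokens/sec upper bound when all weights are streamed once per token."""
    return bandwidth_tb_s * 1e12 / (weight_gb * 1e9)

weights_gb = 70            # assumed ~70 GB of 8-bit weights, as before
hbm_gpu_tb_s = 3.35        # assumed HBM bandwidth of one high-end GPU
sram_aggregate_tb_s = 100  # assumed combined on-chip SRAM bandwidth of a rack

print(max_tokens_per_second(weights_gb, hbm_gpu_tb_s))         # ~48 tok/s
print(max_tokens_per_second(weights_gb, sram_aggregate_tb_s))  # ~1,400 tok/s
```

Under those (generous) assumptions the ratio lands around 30x, which is at least in the ballpark of the speedup claimed above. The catch is that each chip only carries a few hundred MB of SRAM, so a big model has to be sharded across hundreds of chips: fine for serving one model, awkward for anything else.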
These LPUs are pretty useless for training, though, and training is what companies building models actually need the hardware for! Training is expensive, inference is cheap (someday, not now).
There's also a Canadian company that _literally burned the model into the silicon mask_ of a chip. It's unbelievably fast (1000x), but of course not flexible at all: https://chatjimmy.ai