How feasible would it be to integrate a neural video codec into the SoC/GPU silicon?
There would be model size constraints, and the open question is what quality a codec could achieve under those constraints.
Would be interesting if it no longer made sense to develop traditional video codecs at all.
The current video<->latents networks (the autoencoder part of generative video models) don't optimize for compression alone. And you probably wouldn't want variable-size input in an actual video codec anyway.
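To make the distinction concrete, here's a minimal PyTorch sketch of the rate-distortion objective a compression-first neural codec trains against, which is exactly the term the generative video autoencoders don't prioritize. Everything here is hypothetical for illustration: `TinyFrameCodec` is a toy per-frame autoencoder, and the rate term is a crude proxy, not a real bit estimate.

```python
import torch
import torch.nn as nn

class TinyFrameCodec(nn.Module):
    """Toy per-frame autoencoder with a fixed 8x spatial downsample (hypothetical)."""
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 4, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        y = self.encoder(x)
        # Additive uniform noise is the standard differentiable stand-in
        # for quantization during training.
        y_hat = y + torch.empty_like(y).uniform_(-0.5, 0.5)
        return self.decoder(y_hat), y_hat

def rate_distortion_loss(x, x_hat, y_hat, lam=0.01):
    # Distortion: plain MSE. Rate: a placeholder proxy; real learned codecs
    # use a learned entropy model to estimate -log2 p(y_hat), i.e. actual bits.
    distortion = nn.functional.mse_loss(x_hat, x)
    rate_proxy = y_hat.abs().mean()  # NOT a real bit estimate
    return distortion + lam * rate_proxy

frames = torch.rand(4, 3, 64, 64)  # toy batch of RGB frames
codec = TinyFrameCodec()
recon, latents = codec(frames)
loss = rate_distortion_loss(frames, recon, latents)
loss.backward()
print(f"R-D loss: {loss.item():.4f}")
```

The lambda in the loss is the knob that trades bitrate against quality; sweeping it gives the rate-distortion curve you'd compare against H.265/AV1. A generative model's VAE has no rate term at all, so its latents are organized for diffusion/generation, not for cheap entropy coding, which is the point above.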