> No amount of bloat matches what an LLM needs.
I don't think that's necessarily true. For instance, LinkedIn uses more memory than Gemma E2B inference does.
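For rough scale, here's a weights-only back-of-envelope for a ~2B-parameter model like Gemma E2B at common quantization levels. This is a sketch, not a measurement; the 2B effective parameter count and the quantization widths are assumptions, and real inference adds KV cache and runtime overhead on top:

```python
# Rough weights-only memory for a ~2B-parameter model (assumed size)
# at common precisions. Illustrative numbers, not measured footprints.
PARAMS = 2e9  # assumed effective parameter count

for name, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    gib = PARAMS * bytes_per_param / 2**30
    print(f"{name}: ~{gib:.1f} GiB for weights alone")
```

At int4 that's under a gigabyte of weights, which is the kind of figure people have in mind when comparing small-model inference to a heavyweight web app.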
LinkedIn is an entirely different category, and an extreme case at that. We’re not talking about LLMs replacing LinkedIn either; it’s an entirely different comparison.