Dumb questions from someone not in the field...
What is a distilled model?
Why doesn't Google do this (to make their models smaller)?
Seems like you could make a competitor to Gemini?
No question is stupid!
1. Distilled means taking the intelligence of a big model and compacting it into a tiny model.
2. Google already does this with FunctionGemma, but Needle argues that better performance could be achieved with a 10x smaller model using our technologies.
Model distillation is lossy compression of a big model to produce a smaller one.
The smaller model requires less space on disk, less video memory, and less compute (cheaper hardware).
The downside is that the distilled model performs worse on the same benchmarks than the original model.
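To make "lossy compression" concrete, here's a minimal sketch of the classic soft-target distillation loss (Hinton-style), where the student is trained to match the teacher's output distribution instead of just hard labels. The temperature value is an illustrative assumption, not any particular model's setup:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both output distributions with a temperature, then push
    # the student's distribution toward the teacher's via KL divergence.
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t * t)
```

The teacher's full probability distribution carries more signal per example than a one-hot label, which is part of why the student can get surprisingly close despite being much smaller.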
There are two answers already and neither is entirely adequate.
In normal LLM training, you take a set of documents and have the model learn to predict the next token, then use some private RLHF/RLVR etc. data to teach it to produce good chat outputs.
In distillation, you take a set of prompts you are interested in, record the big LLM's outputs, then train your small model to produce the same outputs as the big LLM.
This has a few advantages: you get good performance on the documents/prompts you care about much more quickly, with a much cheaper training budget, and you don't have to worry about acquiring very expensive RLHF/RLVR training data.
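Here's a minimal sketch of that "record the teacher, imitate it" recipe, assuming Hugging Face transformers. The model names, prompts, and the fine_tune() step are placeholders; any standard supervised fine-tuning loop works for the last step:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_name = "big-frontier-model"  # hypothetical teacher checkpoint
tokenizer = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModelForCausalLM.from_pretrained(teacher_name)

# Prompts you actually care about (illustrative).
prompts = ["Summarize this contract: ...", "Extract the dates: ..."]

# Step 1: record the big model's completions for your prompts.
pairs = []
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = teacher.generate(**inputs, max_new_tokens=256)
    # Keep only the newly generated tokens, not the echoed prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    completion = tokenizer.decode(new_tokens, skip_special_tokens=True)
    pairs.append({"prompt": prompt, "completion": completion})

# Step 2: fine-tune the small model to reproduce those completions
# with an ordinary next-token cross-entropy objective.
student = AutoModelForCausalLM.from_pretrained("small-student-model")
fine_tune(student, pairs)  # placeholder: any standard SFT loop works here
```

Note that the training signal is just the teacher's completions, so you never need the teacher's weights or its RLHF data, only access to its outputs.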
A lot of the strongest Chinese LLMs got good very quickly through distillation from frontier models, which is why Anthropic/Google/OpenAI are blocking it so aggressively.