It's _relatively_ democratic when compared to these counterfactual gatekeeping scenarios:
- What if these centralized providers had restricted their LLMs to a small set of corporations / nations / qualified individuals?
- What if Google, which invented the core transformer architecture, had kept the research paper to itself instead of publishing it openly?
- What if the universities and corporations that had worked on concepts like the attention mechanism, so essential to Google's paper, had instead kept that work to themselves?
- What if the base models, recipes, datasets, and frameworks for training our own LLMs had never been open-sourced and published by Meta/Alibaba/DeepSeek/Mistral/many more?
> - What if Google, which invented the core transformer architecture, had kept the research paper to itself instead of publishing it openly?
I'm pretty sure someone else would have come along with a similar idea some time later, because the fundamentals of this work had already been discussed for decades before the "Attention Is All You Need" paper. The novel thing they did was combining existing know-how into a new idea and making it public. Several ingredients of the underlying research are decades old (interestingly, back then some European universities were leading the field).