non-synthetic pre-training text was exhausted a long time ago; the focus now is more on data quality and rl/post-training.
cost will keep going down and more powerful chips will keep becoming available, like it always has.
reinforcement learning doesn't have a fixed ceiling, and advances in software will keep happening as well.
things like distilling smaller models will likely become free, e.g. via speculative decoding: it speeds up the large model (so there's an incentive to run a small draft model at inference time anyway), and since the large model's logits have to be computed to verify the draft's tokens, those logits are free to pick up as a distillation signal for the small model.
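a toy sketch of the "free distillation" point above, with made-up stand-in models (`draft_model` and `target_model` here are hypothetical toy functions, not real LLMs): during a speculative decoding step, the target model's distributions are computed for verification anyway, so they can be collected as soft labels for distilling the draft model at no extra cost.

```python
import math
import random

random.seed(0)

VOCAB = 8  # toy vocabulary size


def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]


# Toy stand-ins for a small draft model and a large target model.
# Both map a context (list of token ids) to a distribution over VOCAB.
def draft_model(context):
    logits = [((len(context) + t) % 3) * 1.0 for t in range(VOCAB)]
    return softmax(logits)


def target_model(context):
    logits = [((len(context) + t) % 5) * 1.5 for t in range(VOCAB)]
    return softmax(logits)


def speculative_step(context, k=4):
    """One round of speculative decoding.

    The draft model proposes k tokens; the target model scores them with
    the standard accept/reject rule. Crucially, the target distributions
    computed for verification are returned too: they are exactly the soft
    labels needed to distill the draft model, obtained for free.
    """
    # 1. draft model proposes k tokens sampled from its own distribution
    proposed, q_dists = [], []
    ctx = list(context)
    for _ in range(k):
        q = draft_model(ctx)
        tok = random.choices(range(VOCAB), weights=q)[0]
        proposed.append(tok)
        q_dists.append(q)
        ctx.append(tok)

    # 2. target model verifies; its distributions are computed anyway,
    #    so collect (context, target_dist) pairs as free distillation data
    accepted, distill_pairs = [], []
    ctx = list(context)
    for tok, q in zip(proposed, q_dists):
        p = target_model(ctx)
        distill_pairs.append((tuple(ctx), p))  # free soft labels
        # standard speculative-decoding acceptance test
        if random.random() < min(1.0, p[tok] / q[tok]):
            accepted.append(tok)
            ctx.append(tok)
        else:
            break  # stop at the first rejected token
    return accepted, distill_pairs


def kl(p, q):
    """KL(p || q): the usual distillation loss between target soft
    labels p and the draft model's distribution q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)


accepted, pairs = speculative_step([1, 2, 3])
# distillation loss the draft model could be trained on, at no extra
# inference cost (the target distributions were needed anyway)
loss = sum(kl(p, draft_model(list(ctx))) for ctx, p in pairs) / len(pairs)
```

in a real system the soft labels would of course be logged and used in a separate training loop rather than computed inline like this.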
> cost will keep going down and more powerful chips will keep becoming available, like it always has.
cost is not a monotonic function.