I don't think people realize how irrational that argument is (that SOTA is better, so you have to use SOTA).
Open weights will always trail SOTA. Forever. Now suppose both keep improving every year. In 100 years, the open-weight model will be, say, 100x better than today's models. But the SOTA model will be 101x better. And people will still make this argument that you should pay a premium for SOTA, despite the open weights being 100x better than anything we have today.
Today's open weights are better than the SOTA models from a year ago. Yet people were using those SOTA models for coding a year ago. If SOTA was good enough then, why isn't something equal or better good enough now?
The answer is: it is good enough. But people are irrationally afraid of missing out (FOMO). They're not really using their brains; they're letting fear drive their decisions. They're afraid "something bad" will happen if they don't use the absolute latest model. Despite repeatable, objective benchmarks showing that open weights are perfectly capable of doing real work today, the fear is that we're missing out on something better. So people throw away their money and struggle with rate limits because of that fear.