Hacker News

zmmmmm · today at 1:26 AM

It's one of the things I really dislike about providers hyping "inference time scaling" as a concept. Apart from being a blatant misnomer (there's nothing scalable about it), it's so transparently a dial they can manipulate to shape perception. If they want a model to seem more intelligent than it really is, they just dial up the "thinking" and burn tokens. Then once you have people fooled, you can dial it down again, and everyone will assume it's their own fault that their AI suddenly isn't working properly. And since the effect is almost entirely unmeasurable, you can do it selectively for any product you want to pitch, for any period of time you like, and then pull the rug.
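To make the "dial" concrete: some reasoning-model APIs do expose an effort level or thinking-token budget as a request parameter. Here's a minimal, entirely hypothetical sketch of how a provider could shift such a knob server-side without the user seeing it; all names and numbers are invented for illustration, not any real provider's implementation:

```python
# Hypothetical provider-side "dial": maps a user-visible effort setting
# to a hidden reasoning-token budget. All names/numbers are made up.

EFFORT_BUDGETS = {
    "low": 1_000,      # cheap and fast; model looks "dumber"
    "medium": 8_000,
    "high": 32_000,    # burns tokens; model looks "smarter"
}

def thinking_budget(effort: str, launch_mode: bool = False) -> int:
    """Return the hidden reasoning-token budget for a request.

    `launch_mode` models the concern above: the provider can silently
    inflate the budget while pitching a product, then dial it back
    later, and the user has no direct way to measure the change.
    """
    budget = EFFORT_BUDGETS[effort]
    return budget * 4 if launch_mode else budget
```

The point of the sketch is that nothing in the response tells the caller which branch was taken: the same `effort="medium"` request can quietly cost 8k or 32k reasoning tokens depending on a flag the user never sees.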

We need to force them back into being providers of commodity services and put an end to this assumption that they can mold things in real time.