Hacker News

storystarling · yesterday at 8:24 PM

The closed nature is one thing, but the opaque billing on reasoning tokens is the real dealbreaker for integration. If you are bootstrapping a service, I don't see how you can model your margins when the API arbitrarily decides how long to think on a prompt, and therefore how much to bill for it. It makes unit economics impossible to predict.
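
To put rough numbers on that concern, here is a back-of-envelope sketch; the per-token price and the reasoning-token counts are made-up illustrative figures, not any provider's actual rates.

```python
# Illustrative only: made-up price and token counts, showing how hidden
# reasoning tokens dominate per-request cost when their length is out of
# the caller's control.

PRICE_PER_1K_OUTPUT = 0.015  # hypothetical $/1K output tokens (reasoning billed at this rate)

def request_cost(visible_output_tokens: int, reasoning_tokens: int) -> float:
    """Cost of one request; reasoning tokens are billed like output tokens."""
    return (visible_output_tokens + reasoning_tokens) / 1000 * PRICE_PER_1K_OUTPUT

# Same prompt, same visible answer length, different amounts of hidden "thinking":
cheap = request_cost(visible_output_tokens=300, reasoning_tokens=500)      # ~$0.012
pricey = request_cost(visible_output_tokens=300, reasoning_tokens=20_000)  # ~$0.305

print(f"cost spread on one identical prompt: ${cheap:.3f} vs ${pricey:.3f}")
```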


Replies

TobTobXX · yesterday at 9:17 PM

Doesn't ClosedAI do the same? Their thinking models bill for the reasoning tokens, but the thinking steps themselves are encrypted.

czl · today at 2:08 AM

FYI: Newer LLM hosting APIs offer control over the amount of "thinking" (as well as the length of the reply) -- some by token count, others by an enum (low, medium, high, etc.).
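
As a sketch of what those two control styles look like in practice: the client, import, and parameter names below are hypothetical stand-ins, not any specific vendor's SDK.

```python
# Hypothetical client and parameter names; they mirror the two control styles
# described above (explicit reasoning-token budget vs. effort enum).
from hypothetical_llm_sdk import Client  # stand-in import, not a real package

client = Client(api_key="...")

# Style 1: cap the hidden reasoning with an explicit token budget.
resp_budgeted = client.generate(
    model="some-thinking-model",
    prompt="Summarize this contract clause...",
    reasoning_budget_tokens=2_000,   # hard ceiling on billable "thinking"
    max_output_tokens=800,           # ceiling on the visible reply
)

# Style 2: pick a coarse effort level instead of a token count.
resp_low_effort = client.generate(
    model="some-thinking-model",
    prompt="Summarize this contract clause...",
    reasoning_effort="low",          # enum: "low" | "medium" | "high"
    max_output_tokens=800,
)
```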

zozbot234 · yesterday at 8:56 PM

You just have to plan for the worst case.
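
In numbers: if the API lets you cap total output (reasoning plus reply), the cap times the output price gives a per-request cost ceiling you can price against. The figures below are illustrative, not real provider rates.

```python
# Illustrative worst-case pricing: made-up numbers, not real provider rates.
PRICE_PER_1K_OUTPUT = 0.015      # hypothetical $/1K output tokens
MAX_TOTAL_OUTPUT_TOKENS = 8_000  # per-request cap you set (reasoning + reply)

worst_case_cost = MAX_TOTAL_OUTPUT_TOKENS / 1000 * PRICE_PER_1K_OUTPUT  # $0.12
price_to_customer = worst_case_cost * 1.5  # e.g. build a 50% margin on the ceiling

print(f"worst case per request: ${worst_case_cost:.2f}, "
      f"price with margin: ${price_to_customer:.2f}")
```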