Open-weight LLMs like DeepSeek demonstrate that inference with near-frontier models can be run at very low cost.