The idea is that smarter models might use fewer turns to accomplish the same task, reducing overall token usage.
Though, from my limited testing, the new model is far more token-hungry overall.
Well, you'll need the same prompt for input tokens?