> Code assistants determine which tool to execute to meet a specific goal. They pick the tool, then execute it (meaning: they build command-line arguments, run the command-line app, analyze output, assess the outcome) as subtasks.
And they do this - wait for it - by emitting tokens. Which are then parsed into a function call.
You’re just mistaking a harness around an LLM for something more. At the core, the LLM takes input tokens and outputs the most likely next tokens. Those tokens might be interpreted as a tool call or anything else, but it’s still just token prediction.
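A minimal sketch of what I mean (the model is stubbed out and the tool names and JSON format are made up for illustration; real harnesses use whatever schema the model was trained on):

```python
import json

def fake_model(prompt: str) -> str:
    # Stand-in for next-token prediction; a real model streams tokens.
    # Either way, all it produces is text.
    return '{"tool": "run_shell", "args": {"cmd": "ls /tmp"}}'

# The harness, not the model, owns the actual functions.
TOOLS = {
    "run_shell": lambda args: f"(pretend output of: {args['cmd']})",
}

def harness_step(prompt: str) -> str:
    raw = fake_model(prompt)        # 1. model emits tokens (just text)
    call = json.loads(raw)          # 2. harness parses them as a tool call
    tool = TOOLS[call["tool"]]      # 3. harness picks the function
    return tool(call["args"])       # 4. harness executes it; the result is
                                    #    fed back into the context as text

print(harness_step("list the temp dir"))
```

Everything agentic here (parsing, dispatch, execution, feeding results back) happens outside the model. The model's entire contribution is the string on line one.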
If you disagree, explain what the actual difference is. I claim that LLMs “use” tools by emitting tokens, which the harness then parses and passes to a tool call. If you disagree, how does it work instead?