An LLM only outputs tokens, so this could be seen as an extension of tool calling: the model has been trained on the knowledge and use cases for "tool-calling" itself as a sub-agent.
Ok, so agent swarm = tool calling where the tool is an LLM call and the argument is the prompt.
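A minimal sketch of that idea: from the orchestrator's point of view, a sub-agent is just another entry in the tool table, one whose implementation happens to be an LLM call and whose argument is the prompt. All names here (`call_llm`, `sub_agent`, `dispatch`) are hypothetical, and `call_llm` is a stub standing in for a real model API.

```python
def call_llm(prompt: str) -> str:
    # Stub for a real model endpoint (assumption; swap in an actual API call).
    return f"[sub-agent response to: {prompt}]"

def calculator(expression: str) -> str:
    # An ordinary tool: deterministic code, string in, string out.
    return str(eval(expression, {"__builtins__": {}}))

def sub_agent(prompt: str) -> str:
    # A "tool" whose implementation is just another LLM call;
    # the tool's argument is the sub-agent's prompt.
    return call_llm(prompt)

TOOLS = {"calculator": calculator, "sub_agent": sub_agent}

def dispatch(tool_name: str, argument: str) -> str:
    # The orchestrating agent treats both tools identically:
    # it never needs to know whether the tool is code or another model.
    return TOOLS[tool_name](argument)

print(dispatch("calculator", "2 + 3"))        # → 5
print(dispatch("sub_agent", "summarize the meeting notes"))
```

The point of the sketch is the symmetry: the dispatch loop is unchanged whether the tool body is deterministic code or another LLM, which is what makes "agent swarm" reducible to plain tool calling.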