Is an AI Agent just an LLM wrapper? Is there anything more interesting to it?
I hate to say "it depends" but it, aaah, kinda depends. Nailing down a good definition of "Agent" has been a problem dating back at least to the 1990s, if not the 1980s. So, depending on which definition of "AI Agent" you're using, you arguably don't even need an LLM at all. Heck, using the most expansive definition I've seen, a mechanical thermostat counts. I don't know that I'd go that far, but I'll definitely say that I do not consider Agents to require the use of LLMs.
That said, the "Agent pattern du jour" is heavily based on using LLMs to provide the "brain" of the Agent, plus Tool Calling to let it do things an LLM can't normally do. But still... depending on just what you do with those tool calls, and any other code that sits in your Agent implementation, it certainly can be more than "just" an LLM wrapper.
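To make that concrete, here's a rough sketch of that loop in Python. Everything is made up for illustration: call_llm() is a stand-in for whatever model API you actually use, and the dispatch code around it is exactly the part that can grow into something more than a wrapper.

    def get_weather(city: str) -> str:
        # Hypothetical tool; a real one would call out to an actual API.
        return f"Sunny in {city}"

    TOOLS = {"get_weather": get_weather}

    def call_llm(messages):
        # Placeholder for a real model call. This fake version asks for one
        # tool call, then "answers" once it sees the tool result.
        if messages[-1]["role"] == "tool":
            return {"type": "final", "content": "It's " + messages[-1]["content"]}
        return {"type": "tool_call", "name": "get_weather", "args": {"city": "Oslo"}}

    def run_agent(user_input: str) -> str:
        messages = [{"role": "user", "content": user_input}]
        for _ in range(5):  # cap the loop so a confused model can't spin forever
            reply = call_llm(messages)
            if reply["type"] == "tool_call":
                result = TOOLS[reply["name"]](**reply["args"])
                messages.append({"role": "tool", "content": result})
            else:
                return reply["content"]
        return "giving up"

    print(run_agent("What's the weather in Oslo?"))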
Nothing stops you from, for example, using the BDI [1] architecture, implementing multi-level memory analogous to the way human memory works, wiring in some inductive learning, throwing in some case-based reasoning, and adding an ontology-based reasoning engine.
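Purely as an illustration (everything below is made up, not any particular framework), the skeleton of a BDI-style loop is small enough to sketch; the point is that there's room for real agent machinery around whatever the LLM does.

    from dataclasses import dataclass, field

    @dataclass
    class BDIAgent:
        beliefs: dict = field(default_factory=dict)     # what the agent currently thinks is true
        desires: list = field(default_factory=list)     # goals it would like to achieve
        intentions: list = field(default_factory=list)  # goals it has committed to pursuing

        def perceive(self, observation: dict):
            self.beliefs.update(observation)            # belief revision from new input

        def deliberate(self):
            # commit to any desired goal not already satisfied by current beliefs
            self.intentions = [g for g in self.desires if not self.beliefs.get(g)]

        def act(self):
            for goal in self.intentions:
                print(f"executing plan for: {goal}")    # a real agent would plan here (maybe via an LLM)
                self.beliefs[goal] = True               # pretend the plan succeeded

    agent = BDIAgent(desires=["report_sent"])
    agent.perceive({"report_sent": False})
    agent.deliberate()
    agent.act()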
Most people today aren't doing this, because they're mostly johnny-come-latelies who don't know anything about AI besides what they see on Twitter, Reddit, and LinkedIn, and wouldn't know BDI from BDSM.
[1]: https://en.wikipedia.org/wiki/Belief%E2%80%93desire%E2%80%93...
Yes. Check the codebases. It's all prompt scaffolding. All of it. Chain of thought, agents, tool use. It's just parsing the user's input and adding text around it. "You are an expert X". That's the whole edifice.
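Boiled down, the scaffolding is roughly this (a sketch; the messages follow the common chat-completion shape, and the actual API call is left out):

    def scaffold(user_input: str, domain: str) -> list[dict]:
        # Wrap the raw user input in a role prompt before sending it to the model.
        return [
            {"role": "system", "content": f"You are an expert {domain}. Think step by step."},
            {"role": "user", "content": user_input},
        ]

    messages = scaffold("Why is my build failing?", "build engineer")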
I think an Agent is an LLM that interacts with the outside world via a protocol like MCP, which is a kind of REST-like protocol with a detailed description for the LLM on how to use it. An example is an MCP server that knows how to look up the price for a given stock ticker, which enables the LLM to report the current price for that ticker.
see: https://github.com/luigiajah/mcp-stocks
The implementation: https://github.com/luigiajah/mcp-stocks/blob/main/main.py
Each MCP endpoint comes with a detailed description; that description becomes part of the metadata published by the MCP server / extension. The LLM reads this instruction when the end user adds the MCP extension, so it knows how to call it.
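For a feel of what that looks like, here's a minimal sketch in the style of the official Python MCP SDK's FastMCP helper (this is not the linked repo's actual code, and the price lookup is stubbed out); the function's docstring is what gets published as tool metadata.

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("stocks")

    @mcp.tool()
    def get_stock_price(ticker: str) -> str:
        """Return the current price for the given stock ticker symbol."""
        prices = {"AAPL": 227.5, "MSFT": 415.2}   # stubbed data for illustration
        price = prices.get(ticker.upper())
        return f"{ticker.upper()}: {price}" if price is not None else f"No data for {ticker}"

    if __name__ == "__main__":
        mcp.run()   # serves the tool (stdio transport by default)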
The main difference between REST and MCP is that MCP can maintain state for the current session (that's optional), while REST is supposed to be inherently stateless.
I think most of the other agent protocols are variations on MCP.