Hacker News

mindcrime · today at 5:55 AM · 1 reply

I hate to say "it depends," but it, aaah, kinda depends. Nailing down a good definition of "Agent" has been a problem dating back at least to the 1990s, if not the 1980s. So, depending on which definition of "AI Agent" you're using, you arguably don't even need an LLM at all. Heck, by the most expansive definition I've seen, a mechanical thermostat counts. I don't know that I'd go that far, but I'll definitely say that I do not consider Agents to require the use of LLMs.

That said, the "Agent pattern du jour" is heavily based on using an LLM as the "brain" of the Agent, with Tool Calling to let it do things an LLM can't normally do. But still... depending on just what you do with those tool calls, and on whatever other code sits in your Agent implementation, it certainly could be more than "just" an LLM wrapper.
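To make the pattern concrete, here's a minimal sketch of that "LLM brain + tool calling" loop. The `fake_llm` function is a stand-in I've made up for illustration; a real Agent would call an actual LLM API there and parse its tool-call output:

```python
# Minimal sketch of an LLM-driven agent loop with tool calling.
# fake_llm is a stub standing in for a real model call (an assumption
# for illustration); a production agent would hit an LLM API here.

def get_time():
    """A trivial tool the 'model' can call."""
    return "12:00"

TOOLS = {"get_time": get_time}

def fake_llm(messages):
    # Stub: a real LLM decides whether to call a tool or answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_time", "args": {}}
    return {"answer": f"The time is {messages[-1]['content']}."}

def run_agent(user_prompt, max_steps=5):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = fake_llm(messages)
        if "answer" in reply:          # model produced a final answer
            return reply["answer"]
        tool = TOOLS[reply["tool"]]    # dispatch the requested tool
        result = tool(**reply["args"])
        messages.append({"role": "tool", "content": result})
    return "step limit reached"

print(run_agent("What time is it?"))  # → The time is 12:00.
```

Everything interesting lives in what the tools do and what other code wraps this loop, which is exactly where an Agent can become more than an LLM wrapper.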

Nothing stops you from, for example, using the BDI architecture [1], implementing multi-level memory analogous to the way human memory works, wiring in some inductive learning, and throwing in case-based reasoning and an ontology-based reasoning engine.
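For anyone unfamiliar with BDI, a toy deliberation cycle looks something like this. All the names here are hypothetical and the example is deliberately minimal: the agent updates beliefs from percepts, then commits to the first desire whose precondition its beliefs satisfy:

```python
# Toy belief-desire-intention (BDI) sketch; names are illustrative,
# not from any particular BDI framework.

class BDIAgent:
    def __init__(self):
        self.beliefs = {}      # what the agent currently thinks is true
        self.desires = []      # (goal, precondition) pairs it might pursue
        self.intentions = []   # goals it has committed to acting on

    def perceive(self, percepts):
        # Belief revision: fold new percepts into the belief base.
        self.beliefs.update(percepts)

    def deliberate(self):
        # Commit to the first desire whose precondition holds.
        for goal, precond in self.desires:
            if precond(self.beliefs):
                self.intentions.append(goal)
                return goal
        return None

agent = BDIAgent()
agent.desires.append(("turn_on_heating", lambda b: b.get("temp", 20) < 18))
agent.perceive({"temp": 15})
print(agent.deliberate())  # → turn_on_heating
```

Note there's no LLM anywhere in that loop, which is the point: the architecture predates LLMs entirely.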

Most people today aren't doing this, because they're mostly johnny-come-latelys who don't know anything about AI beyond what they see on Twitter, Reddit, and LinkedIn, and wouldn't know BDI from BDSM.

[1]: https://en.wikipedia.org/wiki/Belief%E2%80%93desire%E2%80%93...


Replies

freeamz · today at 8:06 AM

An Agent seems like a process/worker/thread that is running LLM inference?