Hacker News

emp17344 · today at 4:49 AM

But the systems that do that impressive work are no longer just LLMs. Look at the Claude Code leak - it’s a sprawling, redundant maze relying on tools and tests to approximate useful output. The actual LLM is a small portion of the total system. It’s a useful tool, but it’s obviously not truly intelligent - it was hacked together using the near-trillions of dollars AI labs have received for this explicit purpose.


Replies

tptacek · today at 4:52 AM

What does this matter? You can build a working coding agent for yourself extremely quickly; it's remarkably straightforward to do (more people should). But look underneath all the "sprawling tools": the LLM itself is a sprawling maze of matrices. It's all sprawling, it's all crazy, and it's insane what they're capable of doing.
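(The "build one yourself" claim is easy to illustrate. Below is a minimal sketch of the core agent loop — model proposes an action, harness runs the tool, result goes back into the transcript. Everything here is hypothetical: `stub_model` is a deterministic stand-in for a real LLM API call, and the names `run_tool` and `agent` are invented for the sketch, not any vendor's API.)

```python
import subprocess

def stub_model(transcript):
    """Stand-in for an LLM call: picks the next action from the transcript.
    A real agent would send the transcript to a model API here."""
    if not any(msg.startswith("TOOL:") for msg in transcript):
        # No tool output yet: ask the harness to run a shell command.
        return {"tool": "shell", "args": "echo hello"}
    # We have a tool result: declare the task done and return it.
    return {"done": True, "answer": transcript[-1]}

def run_tool(action):
    """Execute the requested shell command and return its output as a message."""
    out = subprocess.run(action["args"], shell=True,
                         capture_output=True, text=True)
    return "TOOL:" + out.stdout.strip()

def agent(task, model=stub_model, max_steps=5):
    """The whole loop: model decides, harness executes, repeat until done."""
    transcript = [task]
    for _ in range(max_steps):
        action = model(transcript)
        if action.get("done"):
            return action["answer"]
        transcript.append(run_tool(action))
    return None

print(agent("print a greeting"))  # prints "TOOL:hello"
```

Swap `stub_model` for a real model call and add a few more tools (read file, write file, run tests) and you have the skeleton of every coding agent.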

Again, if you want to say they're limited in some way, I'm all ears; I'm sure they are. But none of that has anything to do with "statistical text generation". Apparently, a huge chunk of all knowledge work is "statistical text generation". I choose to draw from that the conclusion that the "text generation" part of this is not the interesting part.

sigbottle · today at 4:40 PM

I think you're actually making a point, but overall I still disagree.

I do think LLMs are evolving toward this kind of embodied-cognition type of intelligence, by virtue of how well they interoperate with text. I mean, you don't need to "make the text intelligible" to the LLM; the LLM just understands all kinds of garbage you throw at it.

Now the question is: Is intelligence being able to interoperate?

In the traditional sense, no. In a loose sense, yes, because people would have said that intelligence is the ability to do anything — but that's not a useful category (otherwise, traditional computer programs would be "intelligent"). When I hear "intelligence", though, I think of something like: the model can represent an objective reality well, it makes correct predictions more often than not, it's one of those fictional characters that gets anything and everything right. That's how it's framed in a lot of pop culture and in a lot of "rationalist" (LessWrong-style) spaces.

But if LLMs can understand a ton of unstructured intent and interoperate with all of our software tools pretty damn well... I mean, I would not call that "a bunch of hacks". In some sense, this is an appeal to the embodied cognition program. The brain-in-a-vat approach to intelligence fails.

But it clearly enables new capabilities that previously required human intelligence. To take a blatantly negative example: the surveillance state is now 100% possible with AI. It doesn't take deep knowledge of quantum physics to implement — just a large amount of engineering effort: data pipelines, data lakes, and LLMs spread throughout the system, monitoring victims.

So I'd call it intelligence, but with a qualifier, to avoid slipping between senses of the word. It may even be valid to call the previous notion of intelligence a bad one, sure. But I think the issue you may be running into is that people are conflating all sorts of notions of intelligence.

Now, you can add an ad hoc hypothesis here: in order to interoperate, you have to reason over some kind of hidden latent space in a way no human could before. Being able to interoperate is not orthogonal to general intelligence — it could even be argued that intelligence just is interoperation.

If you're arguing for embodied cognition, fine, we agree to some extent :)

The fear is that the AI must be able to emulate, internally, a latent space that reflects some "objective notion of reality". If it could do that, then shit, this just breaks all the victories of empiricism, man. Tell me about a language model that can just sit in a vat and objectively derive quantum mechanics by thinking about it really hard, with only data from before 1900.

I don't think you need to be this caricature of intelligence to be intelligent, is what I'm saying, and interoperability is definitely a big aspect of intelligence.
