
lynndotpy · today at 12:08 AM · 2 replies

At the time of that writing, the prevailing thinking in "artificial intelligence" was that we'd encode every Fact we know and every rule of Logic, and from there the computer would make new discoveries. Today's AI researchers would call this "symbolic" AI, as opposed to the "neural" AI powering LLMs. They're like two different worlds.

LLMs are just generating text; they don't know anything. They can't assess whether there is enough data to support an answer. Only when you add a follow-up prompt like "This is wrong, why did you lie?" do they generate text along the lines of "I was wrong, I'm sorry," and so forth.


Replies

theturtletalks · today at 12:28 AM

Did Asimov’s idea of AI revolve around data retrieval? I’ve read that even human intelligence isn’t necessarily about remembering things, but about being able to traverse our knowledge and find an idea or thought quickly.

lern_too_spel · today at 12:52 AM

They can read context and, with fairly high accuracy, say whether that context contains enough information to answer a posed question. They cannot (and we cannot for them) introspect their own weights to say whether those weights already encode enough information to answer it.
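
A minimal sketch of that first point, i.e. asking a model to judge whether given context is sufficient to answer a question. The OpenAI Python SDK, the model name, and the prompt wording here are my own illustrative assumptions, not anything stated in the thread.

    # Sketch: ask a model whether the supplied context is enough to answer a question.
    # Assumes the OpenAI Python SDK is installed and an API key is configured.
    from openai import OpenAI

    client = OpenAI()

    def context_is_sufficient(context: str, question: str) -> str:
        """Return the model's yes/no judgment on whether `context` can answer `question`."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[
                {
                    "role": "system",
                    "content": (
                        "Answer only 'yes' or 'no': does the provided context "
                        "contain enough information to answer the question?"
                    ),
                },
                {
                    "role": "user",
                    "content": f"Context:\n{context}\n\nQuestion:\n{question}",
                },
            ],
        )
        return response.choices[0].message.content.strip().lower()

    # The context says nothing about dates, so a well-behaved model should say "no"
    # rather than guess.
    print(context_is_sufficient(
        context="The Foundation series was written by Isaac Asimov.",
        question="In what year was the first Foundation story published?",
    ))

This only tests judgment about text placed in the context window; it says nothing about what the model's weights already encode, which is the second half of the comment's distinction.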