Hacker News

vessenes · yesterday at 3:12 PM

Please enlighten me.


Replies

jiggawatts · yesterday at 9:38 PM

How many synapses do you have right now in your brain?

You must be a stupid brain if you don’t even know that!

Similarly: you can’t use software to figure out the “process” used to manufacture the chip it is running on.
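The analogy can be made concrete with a small sketch: a program can query whatever identification strings the OS and CPU choose to expose, but no field in that interface names the lithography node the chip was fabricated on. (This is illustrative only; the exact string returned varies by machine.)

```python
import platform

# A program can read the CPU identity the platform exposes...
cpu = platform.processor()  # e.g. "x86_64" or a vendor model string

# ...but there is no "process node" field anywhere in this interface.
# Whether the chip is 5 nm or 14 nm exists only in external documentation,
# outside anything the running software can introspect.
print(repr(cpu))
```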

wizzwizz4 · yesterday at 3:40 PM

Language models entirely lack introspective capacity. Expecting a language model to know what size it is is a category error: you might as well expect an image classifier to know the uptime of the machine it's running on.

Language models manipulate words, not facts: to say they "lie" suggests they are capable of telling the truth, but they don't even have a notion of "truth": only "probable token sequence according to distribution inferred from training data". (And even that goes out the window after a reinforcement learning pass.)
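The "probable token sequence according to distribution inferred from training data" point can be sketched with a toy bigram sampler (an illustrative assumption, nothing like a real LLM's architecture): the sampler emits whatever the counts make likely, and has no machinery for checking whether the emitted sentence is true.

```python
import random
from collections import Counter, defaultdict

# Tiny illustrative "training corpus" -- an assumption for the sketch.
corpus = "the model has many parameters the model has no introspection".split()

# Infer a next-token distribution P(next | current) from bigram counts.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(token, rng):
    # Sample purely from inferred frequencies; no notion of "truth" exists
    # anywhere in this procedure -- only relative counts.
    dist = counts[token]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
print(sample_next("model", rng))  # whatever followed "model" most often
```

Nothing in `sample_next` can report a fact about the model itself (its own size, say); such facts live outside the distribution it samples from.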

It would be more accurate to say that they're always lying, or perhaps "bluffing", and sometimes those bluffs happen to be natural language sentences that human readers interpret as describing actual states of affairs, while other times readers interpret them as describing false ones.
