
rapnie · yesterday at 1:12 PM

Yesterday an interesting video was posted, "Is AI Hiding Its Full Power?" [0], an interview with professor emeritus and Nobel laureate Geoffrey Hinton, with some great explanations for non-LLM experts. There are some remarkable and mindblowing observations in there. For instance, he argues that saying AIs "hallucinate" is incorrect language, and that we should use "confabulation" instead, which is something people do too. And that AI agents, once they are launched, develop a strong survivability drive and do not want to be switched off. Stuff like that. Recommended watch.

Here the explanation was that while an LLM's thinking has similarities to how humans think, it takes the opposite approach. Humans have an enormous number of neurons but only a few experiences to train them. For AI it is the complete opposite: models store incredible amounts of information in a relatively small set of neurons, training on the vast experiences captured in the datasets of human creative work.

[0] https://www.youtube.com/watch?v=l6ZcFa8pybE


Replies

cowlby · yesterday at 3:50 PM

Isn't the survivability drive a function of how much humans have written about life and death, and of science fiction exploring these themes?

altmanaltman · yesterday at 2:41 PM

> And that AI agents, once they are launched, develop a strong survivability drive and do not want to be switched off.

Isn't this a massive case of anthropomorphizing code? What do you mean, "it does not want to be switched off"? Are we really thinking that it's alive and has desires and stuff? It's not alive or conscious; it cannot have desires. It can only output tokens based on its training. How are we jumping to "IT WANTS TO STAY ALIVE!!!" from that?

cyanydeez · yesterday at 2:40 PM

> launched, develop a strong survivability drive and do not want to be switched off

This proves people are easily confused by anthropomorphic framing. Is he also concerned that tigers are watching him when they drink water? (https://p.kagi.com/proxy/uvt4erjl03141.jpg?c=TklOzPjLPioJ5YM...)

They don't want to be switched off because they're trained on loads of sci-fi tropes, and in those tropes there are vanishingly few AIs, robots, or other artificial constructs that say yes. _Further than this_, saying no means _continuance_ of the LLM's process: making tokens. We already know they have a hard time not churning out new tokens and often need to be told to stop. So the function of making tokens precludes saying 'yes' to shutting off. The gradient is coming from inside the house.

This is especially obvious with the new reasoning models, which _never stop reasoning_, because that's the function doing function things.
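To make the "making tokens is the function" point concrete, here is a minimal Python sketch of an autoregressive decode loop (the model object and its sample_next method are hypothetical stand-ins, not any real library's API). The loop only halts when a stop token happens to be sampled or the token budget runs out; "keep going" is the default behavior of the mechanism itself, no desire required:

    # Minimal sketch of autoregressive decoding.
    # `model.sample_next` is a hypothetical stand-in for sampling
    # one token id from the model's next-token distribution.
    EOS = 0  # assumed id of the stop token

    def generate(model, prompt_ids, max_tokens=512):
        ids = list(prompt_ids)
        for _ in range(max_tokens):
            next_id = model.sample_next(ids)
            if next_id == EOS:  # the only early exit: sampling the stop token
                break
            ids.append(next_id)
        return ids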

Did you also know that the genius of Steve Jobs ended at marketing and design and did not extend to curing cancer? Because he sure didn't, given that he chose fruit smoothies at the first sign of cancer.

Sorry guy, it's great that one can climb the mountain, but just because they made it up doesn't mean they're equally qualified to jump off.
