Hacker News

bertil today at 12:58 PM · 7 replies

> the AI says things like “Interesting!”

My experience of those utterances is that they’re purely phatic mimicry: they lack genuine intuitive surprise and just mark a very odd shift in direction. The problem isn’t the lack of a path; it’s that the rhetorical follow-up to those leaps is usually a relevant result, so the stream of tokens ends up rapidly over-playing its own conviction. That’s why it’s necessary (and often ineffective) to tell them to validate their findings thoroughly: too much of their training is “That’s odd” followed by “Eureka!” and not “Nevermind…”


Replies

etherealG today at 3:10 PM

And what I find fascinating is I see similar mimicking by my 5 year old. Perhaps we shouldn’t be so quick to call this a lack of being genuine. Sometimes emotions are learned in humans but we wouldn’t call them fake.

I don’t want to declare machines to have emotion outright, but to call mimicry evidence of falsehood is also itself false.

jackcarter today at 1:24 PM

It’s funny that this is probably due to bias in the training texts, right? Humans are way more likely to publish their “Eureka!” moments than their screwups… if they published the screwups too, maybe models wouldn’t exhibit this behavior.

Now that AI labs have all these “Nevermind” texts to train on, maybe it’s getting easier to correct? (Would require some postprocessing to classify the AI outputs as successful or not before training)
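The postprocessing step imagined above could be sketched roughly like this. Everything here is hypothetical: the keyword-matching classifier is a toy stand-in for whatever real success/failure model a lab would use, and all names are made up for illustration.

```python
# Toy sketch: label reasoning transcripts by outcome before training,
# keeping dead ends so the model sees "That's odd" followed by
# retraction, not only by "Eureka!". The classifier is a placeholder.

def classify_outcome(transcript: str) -> str:
    """Stand-in for a real outcome classifier (keyword match only)."""
    text = transcript.lower()
    if "nevermind" in text:
        return "dead_end"
    if "eureka" in text:
        return "success"
    return "unknown"

def build_training_set(transcripts):
    # Drop transcripts the classifier can't label; keep both
    # successes and dead ends so the training mix isn't all wins.
    labeled = [(t, classify_outcome(t)) for t in transcripts]
    return [(t, label) for t, label in labeled if label != "unknown"]

examples = [
    "That's odd... Eureka! It works.",
    "That's odd... nevermind, false lead.",
    "Some unrelated chat.",
]
dataset = build_training_set(examples)
```

In practice the hard part is the classifier itself, since "was this reasoning chain actually successful?" is much fuzzier than a keyword match.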

sigbottle today at 1:10 PM

I think a lot of models have to sprinkle in a lot of "fluff" in their thinking to stay within the right distribution. Language is their only medium; the way we annotate context is via brackets, then training them to hopefully respect those brackets. I'd imagine that top labs either explicitly train, or through the RL process the models implicitly learn, to spam tokens that keep them 'within distribution', since everything goes through the same channel and there's no fine-grained separation between things.

Philosophically, it's not like you're a detached observer who simply reasons over all possible hypotheses. Ever get stuck in a dead end and find it hard to dig yourself out? If you were a detached observer, it'd be pretty easy to just switch gears. But it's not (for humans).

hmontazeri today at 2:29 PM

The new Opus 4.7 thinks quite often with: Hmmmm…

Haha anyone else seen this?

epolanski today at 1:38 PM

Interestingly this is strikingly similar to how my mind would process something I find genuinely interesting.

animal531 today at 1:39 PM

I've somehow managed to train mine out of trying to fluff me up the whole time; it's become very factual.

Overall it saves me a lot of time reading when it's just focusing on the details.