Hacker News

themafia · today at 12:39 AM

> we're not actually on the right track to achieve real intelligence.

Real intelligence means saying "I don't know" when you don't know, asking for help, or even just refusing to help, with the unspoken subtext that you don't want to appear stupid.

The models could ostensibly do this when they have low confidence in their own results, but they don't. What I don't know is whether that's because it would be computationally difficult, or because it would harm the reputation of the companies charging a good sum to use them.


Replies

vintagedave · today at 9:22 AM

> Real intelligence means you have to say "I don't know" when you don't know

I have met many supposedly intelligent, certainly high status, humans who don't appear to be able to do that either.

I have more confidence we can train AIs to do it, honestly.

cmrdporcupine · today at 1:15 AM

That's just not how they work, really. They don't know what they don't know, and their process requires producing an output.

I think they're getting better at it, but it's likely just the number of parameters getting bigger and bigger in the SOTA models more than anything.

bluefirebrand · today at 12:52 AM

My theory is that it's because the people building the models, and in charge of directing where they go, love the sycophantic yes-man behavior the models display.

They don't like hearing "I don't know."

colechristensen · today at 12:56 AM

You can TELL the models to do this and they'll follow your prompt.

"Give me your answer and rate each part of it for certainty by percentage" or similar.

wagwang · today at 1:43 AM

You can just tell the agent to do exactly that.
