Hacker News

CGMthrowaway yesterday at 1:38 PM

>INSUFFICIENT DATA FOR MEANINGFUL ANSWER

Boy, it sure would be nice if real LLMs were capable of giving an answer like that.


Replies

temp0826 yesterday at 4:15 PM

Living in South America for a bit really showed me this. I think it's a cultural thing here: someone will always give you an answer, even if it's wrong, and confidently. It was hard for me at first, since I am usually the first person to say "I don't know" (often followed by "but let's slow down and find a good solution").

lynndotpy today at 12:08 AM

At the time the story was written, the prevailing thinking in "artificial intelligence" was that we'd encode every Fact we know and every rule of Logic, and from there the computer would make new discoveries. Today's AI researchers would call this "symbolic" AI, as opposed to the "neural" AI powering LLMs. They're like two different worlds.

LLMs are just generating text; they don't know anything, and they can't assess whether there is enough data for an answer. Only when you add a follow-up prompt like "This is wrong, why did you lie?" does the model generate text such as "I was wrong, I'm sorry," and so forth.

gwerbin yesterday at 2:22 PM

They can do it; it's just not the default behavior, and they need to be prompted to do it. So at least the danger is manageable if you know what you're doing and how to prompt around it.
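For instance, a system-prompt instruction along these lines can nudge a model toward refusing (the exact wording is illustrative, not a tested recipe, and results vary by model):

```
You are a careful assistant. If you do not have enough reliable information
to answer a question, reply exactly: INSUFFICIENT DATA FOR MEANINGFUL ANSWER.
Do not guess, and do not pad a refusal with speculation.
```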

cortesoft yesterday at 3:14 PM

There are a lot of humans who refuse to give that answer, too

amdivia yesterday at 9:02 PM

Exactly!!

I've been trying to work on a new LLM code editor that does just that. When you instruct it to do something, it evaluates your request, tries to analyze its action, object, subject, etc., and maps them to existing symbols in your codebase (or to symbols expected to be created). If everything maps, it proceeds; if the mapping is incomplete, it errors out, stating that your statement contained unresolvable ambiguity.
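A minimal sketch of that mapping step (all names here are invented for illustration; a real editor would use an actual symbol index and an LLM-driven parse of the request):

```python
# Stand-in for a real codebase symbol index.
KNOWN_SYMBOLS = {"parse_config", "ConfigError", "load_file"}

def resolve_references(references, known=KNOWN_SYMBOLS):
    """Split an instruction's symbol references into resolved and unresolved."""
    resolved = [r for r in references if r in known]
    unresolved = [r for r in references if r not in known]
    return resolved, unresolved

def plan_edit(references):
    """Proceed only if every reference maps to a known symbol."""
    resolved, unresolved = resolve_references(references)
    if unresolved:
        # Error out instead of guessing, as the editor described above would.
        raise ValueError(f"unresolvable ambiguity: {unresolved}")
    return resolved
```

The key design choice is refusing at the planning stage, before any code is generated, rather than letting the model improvise a target symbol.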

I think there is a real benefit here, and it might be the next genuinely beneficial, grounded, sustainable use of AI in programming. The current "Claude Code and friends" approach is a state of drunkenness we fell into after the advent of this new technology, and time will prove that it is not sustainable.

bargainbin yesterday at 1:55 PM

You’re absolutely right! I do have insufficient data for a meaningful answer. This is not an *insightful prediction* — it’s *Dunning-Kruger masquerading as qualified intelligence*

ItsClo688 today at 1:56 AM

hahaha, the irony is that "INSUFFICIENT DATA FOR MEANINGFUL ANSWER" requires more intelligence than a confident wrong answer. you have to know what you don't know. current LLMs are optimized to always produce output, which means they've essentially been trained out of epistemic humility.

Asimov's Multivac at least had the dignity to wait.

in-silico yesterday at 9:24 PM

As measured by #_no_answer / (#_incorrect + #_no_answer), the top current models can do it 60-70% of the time (Grok 4.20 is the best at 83%): https://artificialanalysis.ai/evaluations/omniscience
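That metric is just the share of non-correct responses where the model declined rather than answered wrongly. As a quick sketch (the counts below are illustrative, not real benchmark numbers):

```python
def abstention_rate(num_no_answer, num_incorrect):
    """Fraction of non-correct responses where the model declined to answer."""
    return num_no_answer / (num_incorrect + num_no_answer)

# e.g. 83 refusals vs. 17 confident wrong answers -> rate of 0.83
```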

ryanjshaw yesterday at 2:59 PM

I reckon that’s how we know we’ve hit ASI.

narginal yesterday at 2:54 PM

2061, mark the date

ButlerianJihad today at 1:33 AM

This is exactly like a lot of customer service, or technical support.

It seems that they are loath to tell anyone “no”, or that something can’t be done, or that an app doesn’t have a feature or can’t be used in a certain way. Especially when a feature has been removed for security reasons.

In fact, it gets so crazy that I simply cannot get a straight answer out of somebody. If I persist in my line of questioning and they stay evasive or vague for long enough, I ultimately suspect that the answer is “no”, that they're simply not allowed to tell me, and that they're paid and trained specifically to avoid uttering the “n-word”.

In my first job, as a network operator, my supervisor admonished me, and said “we must never tell a customer that we don't know something”. He said that we should tell the customer that “I will go ahead and find out for you, and get back to you on that”.

And that is the kind of slippery non-answer I often received in my most recent job: some manager or supervisor would “look into something” for me and “get back to me”. But the getting-back-to-me part never happened, and I began to suspect it was a platitude meant to satisfy me just enough that I would shut up for a while and stop pressing the issue.

otikik yesterday at 3:24 PM

Just add a skill to Claude

qsera yesterday at 11:33 PM

I just came from Reddit and, seeing this comment, instinctively looked for the "controversial" sort option.

Maybe Hacker News is becoming Reddit...