Yes, of course someone should have investigated, but the larger point here is that people don’t because they are being sold a false narrative that AI is infallible and can do anything.
We could sit here all day arguing “you should always validate the results”, but even on HN there are people loudly advocating that you don’t need to.
We can barely convince the powers that be that eye-witness testimony is unreliable, after all.
Where are you seeing people being told that AI is infallible? AI is being hyped to the moon, but "infallible" is not one of the claims.
To the extent people trust AI to be infallible, it's laziness and rapport. AI is rarely if ever rude without prompting, nor does it criticize extensive question-asking the way many humans would; it's the quintessential enabler[1]. That leads people to assume that because it's useful and helpful for so many things, it'll be right about everything.
The models all have disclaimers that state the opposite. People just gradually lose sight of that.
[1] This might be the nature of LLMs, or it might be by design, similar to social media slop driving engagement. It's in AI companies' interest to have people buying subscriptions to talk with AIs more. If AI goes meta and critiques the user (except in more serious cases like harm to self or others, or specific kinds of cultural wrongthink), that's bad for business.