Hacker News

NicuCalcea yesterday at 7:33 PM

> a lot of doctors are using ChatGPT both to search diagnosis and communicate with non-English speaking patients

I think that's the problem. Who's going to claim responsibility when ChatGPT hallucinates or mistranslates a patient's diagnosis and they die? For OpenAI, this would be a PR nightmare at best, which is why they have safeguards.


Replies

rvnx yesterday at 8:19 PM

Adults bear responsibility for choices about their own lives. In fact, the more educated they are, the better choices they can make.

A doctor who gets refused by ChatGPT doesn't stop needing to communicate with the patient; they fall back to a worse option (Google Translate, a family member interpreting, guessing). Refusal isn't safety, it's liability-shifting dressed up as safety.

If there's no doctor, no interpreter, no pharmacist, just a person with a sick kid and a phone, then "refuse and redirect to a professional" is advice from a world that doesn't exist for them. The refusal doesn't send them to a better option; there is no better option, and that's the reality for a large majority of people on this planet.

The road to hell is paved with good intentions, but open education and unlimited access to knowledge are very good things.

It doesn't change human nature: bad people stay bad, good people stay good.

As for PR, they're optimizing for not being the named defendant in a lawsuit or the subject of a bad news cycle; it's self-interest wearing benevolence as a costume.

This is because harms from answering are punishable (bad PR, unhappy advertisers, unhappy investors, unhappy politicians and dictators, unhappy lobbies, unhappy militaries, etc.), but harms from refusing are invisible and unpunished.

hellohello2 yesterday at 7:36 PM

The doctor would be responsible.

If I had a choice between a doctor that used AI and one that didn't, I would much prefer the one that did...
