A year or so ago, I fed my wife's blood work results into chatgpt and it came back with a terrifying diagnosis. Even after a lot of back and forth it stuck to its guns. We went to a specialist who performed some additional tests and explained that the condition cannot be diagnosed with just the original blood work and said that she did not have the condition. The whole thing was a borderline traumatic ordeal that I'm still pretty pissed about.
> I fed my wife's blood work results into chatgpt and it came back with a terrifying diagnosis
I don't get it... a doctor ordered the blood work, right? And surely they did not have this opinion or you would have been sent to a specialist right away. In this case, the GP who ordered the blood work was the gatekeeper. Shouldn't they have been the person to deal with this inquiry in the first place?
I would be a lot more negative about "the medical establishment" if they had been the ones who put you through the trauma. It sounds like you put yourselves through the trauma by believing "Dr. GPT" instead of consulting a real doctor.
I will take it as a cautionary tale, and remember it next time I feed all of my test results into an LLM.
> it stuck to its guns
It gave you a probabilistic output. There were no guns and nothing to stick to. If you had disrupted the context with enough countervailing opinion it would have "relented" simply because the conversational probabilities changed.
I asked a doctor friend why it seems common for healthcare workers to keep the results sheets to themselves and just give you a good/bad summary. He told me that the average person can't properly understand the data and will freak themselves out over nothing.
I fed about 4ish years of blood tests into an AI and after some back and forth it identified a possible issue that might signal recovery. I sheepishly brought it up with my doc, who actually said it might be worth looking into. Nothing earth shattering, just another opinion.
I think it's your own problem that you got stressed by a probabilistic machine answering with what you want to hear.
I am sorry to have to say so, but the value of LLMs is their ability to reason over their context. Don't use them as a smart Wikipedia (i.e., without context). For your use case, provide them with relevant textbooks and practice handbooks, plus the person's medical history. Then ask your question in a neutral way. Then ask the model to verify its claims in another session and to provide references.
It is so unfortunate that a general chatbot designed to answer anything was the first use case pushed. I get why people are pissed.
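Concretely, a rough sketch of that workflow (context in, neutral question, then independent verification), assuming the OpenAI Python SDK; the model name and file paths here are placeholders, not anything specific:

```python
# Sketch of the "context first, then independent verification" workflow.
# Assumes the OpenAI Python SDK; model name and file paths are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

# Step 1: ground the question in real context (reference texts, history).
context = "\n\n".join(
    open(path).read() for path in ("textbook_excerpt.txt", "medical_history.txt")
)
question = (
    "Given the reference material and history above, what follow-up tests, "
    "if any, would typically be considered? Answer neutrally and do not "
    "state a diagnosis."
)
first = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": f"{context}\n\n{question}"}],
)
answer = first.choices[0].message.content

# Step 2: verify the claim in a separate, fresh session with no shared history.
check = client.chat.completions.create(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": "Critically evaluate the following claim and cite "
                   "references where possible:\n\n" + answer,
    }],
)
print(check.choices[0].message.content)
```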
Stories like yours are why I'm skeptical of these "health insight" products as currently shipped. Visualization, explanation, question-generation - great. Acting like an interpreter of incomplete medical data without a strong refusal mode is genuinely dangerous.
> it stuck to its guns
Everyone that encounters this needs to do a clean/fresh prompt with memory disabled to really know if the LLM is going to consistently come to the same conclusion or not.
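For what it's worth, raw API calls carry no chat memory, so a crude consistency check is just re-asking the same question in several independent sessions. A sketch, again assuming the OpenAI Python SDK with a placeholder model name:

```python
# Crude consistency check: ask the same question in N independent sessions.
# Raw API calls have no memory, so each request is a clean, fresh prompt.
from collections import Counter
from openai import OpenAI

client = OpenAI()
PROMPT = "..."  # the same question, pasted verbatim each time

answers = []
for _ in range(5):
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT}],
    )
    # Keep only the first non-empty line as a rough summary of the conclusion.
    answers.append(resp.choices[0].message.content.strip().splitlines()[0])

# If the conclusions vary wildly across fresh sessions, trust none of them.
print(Counter(answers))
```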
Aren't those two sides of the same coin?
Shouldn't you be happy that it turned out not to be the thing, especially when the signs pointed towards it being "the thing"?
> "A year or so ago"
What model?
Care to share the conversation? Or try again and see how the latest model does?
> A year or so ago, I fed my wife's blood work results into chatgpt
Why would you consult a known bullshit generator for anything this important?
It's interesting because presumably you were too ashamed to tell the doctor "we pasted stuff into chatgpt and it said it means she is sick", because if you had said that, he would have looked at the bloodwork and you could have avoided going to a specialist.
It's an interesting cognitive dissonance that you both trusted it enough to go to a specialist but not enough to admit using it.
> The whole thing was a borderline traumatic ordeal that I'm still pretty pissed about.
Why did you do the thing people calmly explained you should not do? Why are you pissed about experiencing the obvious and known outcome?
In medicine, even a test with worrying results rarely reflects an actual condition requiring treatment. One reason doctors are so bad at long-tail conditions is that they have been trained, both by education and by literal direct experience, that chasing down test results in the absence of symptoms is a reliable way to waste money, time, and emotions.
It's a classic statistics 101 topic to look at screening tests and notice that the majority of "positive" outcomes are false positives.
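To put rough numbers on it (the prevalence, sensitivity, and specificity below are made up for illustration, not taken from any real test):

```python
# Base-rate illustration with made-up numbers: for a rare condition, most
# "positive" screens are false positives even when the test looks accurate.
prevalence = 0.01      # 1% of the screened population has the condition
sensitivity = 0.95     # P(positive | has condition)
specificity = 0.90     # P(negative | no condition)

true_pos = prevalence * sensitivity               # 0.0095
false_pos = (1 - prevalence) * (1 - specificity)  # 0.0990
ppv = true_pos / (true_pos + false_pos)

print(f"P(condition | positive test) = {ppv:.1%}")  # ~8.8%: about 9 in 10 positives are false alarms
```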
Gotta love the replies to this. At least more of the botheads are now acting like they're trying to ask helpful questions instead of just flat out saying "you're using it wrong."
Do you have a custom prompt/personality set? What is it?
Why not just ask WebMD?
You’re pissed about your own stupidity? In asking for deep knowledge and medical advice from a Markov chain?
It never ceases to surprise me how people take word-salad output so seriously.
And they're probably the same people who laugh at ancient folks for carefully listening to shamans.
Please keep telling your story. This is the kind of shit that medical science has been dealing with for at least a century. When evaluating testing procedures, false positives can have serious consequences. A test that's positive every time will catch every single true positive, but it's also worthless. These LLMs don't have a goddamn clue about it. There should be consequences for these garbage fires giving medical advice.
On the flip side, I had some pain in my chest... RUQ (right upper quadrant, for the medical folks).
On the way to the hospital, ChatGPT was pretty confident it was an issue with my gallbladder due to me having a fatty meal for lunch (but it was delicious).
After an extended wait to be seen, they didn't ask about anything like that, and at the end, when they asked if there was anything else to add, I mentioned the ChatGPT / gallbladder suggestion... discharged 5 minutes later with suspicion of the gallbladder, as they couldn't do anything that night.
Over the next few weeks, I got test after test after test to try and figure out what was going on: MRI, CT, ultrasound, etc. They all came back negative for the gallbladder.
ChatGPT was persistent. It said to get a HIDA scan, a more specialised scan. My GP was a bit reluctant but agreed. I got it and was diagnosed with a hyperkinetic gallbladder. It is still not formally recognised as a condition, but it is mostly accepted. So much so that my surgeon initially said it wasn't a thing (then, after doing some research on it, said it is a thing)... and a gastroenterologist also said it wasn't a thing.
I had it taken out a few weeks ago, and it was chronically inflamed, which means the removal was the correct path to go down.
It just sucks that your wife was on the other end of things.