Speaking as the spouse of a medical doctor -- case reports are sometimes a good way to increase the bullet point count in your CV if you are a medical resident. A lot of residents do that just for the sake of beefing up their CVs (to apply for fellowship, for example).
> The articles usually start with a case description followed by “learning points” that include statistics, clinical observations and data from CPSP.
I can see why fictional cases could be used here as a teaching aid -- based on real cases/illnesses but simplified to make the learning points succinct -- but surely if the cases are being cited elsewhere, someone should have raised the issue earlier?
I think this is mainly a case of the common "didn't notice that crucial literature cited in one's own published content was retracted, then got caught with pants down when the replication police came knocking".
Obviously the poor labelling is bad, but 9 bad citations per year isn't the end of science, and better labelling wouldn't discourage all the lazy authors who chose to cite these highlight articles; it'll just shift who is to blame.
The real problem is hosting a review article about research that was retracted, and it sounds like they aren't moving very quickly on taking that piece down.
This is fine, though somewhat belated. But it does nothing to deal with the public's growing distrust of science in general, and medical science in particular.
They had access to ChatGPT for the last 25 years!
I don't mind the fact that the case reports were fictional -- actual cases can be problematic in terms of privacy, as it may be easy to ascertain the patient's identity from the details -- but not putting a notice that a case was fictional (or altered from a real case for privacy), for teaching purposes, is pretty bad.
In the era of GitHub etc, if you're not giving out every single data point of your research, it should be assumed it's fake.
https://onlinelibrary.wiley.com/doi/10.1111/jpc.14206
Maybe we should revisit the routine practice of infant male genital mutilation?
The detail that makes this more than a labeling error: the fictional nature appeared in the journal's author guidelines, not in the published articles. Researchers who cited these 61 papers had no way to distinguish them from genuine case reports. 218 citations later, the fiction is embedded in secondary analyses and literature reviews written by people who had no idea.
The "Baby Boy Blue" (2010) case is the clearest example of the harm. An infant allegedly exposed to opioids through breast milk. That case influenced clinical guidance on codeine safety in nursing for years. The CARE guidelines (Consensus-based Clinical Case Reporting Guidelines) exist specifically to create transparency in case reporting. They're voluntary, which is how a journal can run a 25-year undisclosed fiction program and technically say the authors knew.
Too late, it's already in the bloodstream. LLMs will probably be recommending things to pediatric doctors and families from fabricated archives for years.
I think research should be assumed to be fiction until it's peer reviewed.
What a mess.
> One author of a case report was surprised to learn of the correction — because the case described in her article is true.
So they managed to mess up even the correction of their giant mess.
> correcting the correction "would be difficult."
I bet. That's why they should have got it right in the first place. I would be absolutely ballistic if they were libelling my work like that.