> we must assume that the best AI models (especially ones focusing solely on the medical field) would beat the large majority of humans (aka doctors), if we already have this assumption for software engineers
You first have to assume this for software engineers. Not everyone agrees with that (note: that doesn't mean those same people think AI isn't _useful_).
AIs still have a ton of issues that would be devastating in a doctor. Remember all the AIs mistakenly deleting production DBs? Now imagine they prescribed a medicine cocktail that killed the patient instead. No thanks. The consequences of mistakes set a totally different bar.
Doctors make errors all the time though, so the real argument is about the error rate. If the AI's rate is lower then it's safer (but it's hard to have that convo, I recognise).
Besides, this article was about diagnosis, not prescribing. It's pretty obvious, I think, that diagnosis is one area where AI will perform extremely well in the long run.
I think there are two metrics. The first is outright misdiagnosis, which studies put at between 5 and 8% in the US/Europe. That's a meaningful number to tackle.
The second is overdiagnosis, where a doctor says that, on balance, it could be X for a difficult-to-diagnose but dangerous problem (usually cancer). The impact of overdiagnosis is significant in terms of resources, mental health, cost, etc.
In some subfields, like detection of security weaknesses in obscure C code, AI is already better than software engineers.
It is capable of sifting through enormous reams of data without ever zoning out. Once patients routinely use various wearables, they, too, will produce heaps of data to be analyzed, and AI will be the thing to go to when it comes to anomaly detection.
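To be fair, the baseline for "anomaly detection on wearable data" doesn't even require AI; something as simple as a rolling z-score already flags gross outliers. A minimal sketch (the window size, threshold, and heart-rate samples here are made-up illustrations, not clinical values):

```python
# Minimal rolling z-score anomaly detector for a heart-rate stream.
# Window and threshold are illustrative assumptions, not clinical values.
from statistics import mean, stdev

def detect_anomalies(samples, window=10, threshold=3.0):
    """Return indices whose value deviates more than `threshold`
    standard deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        prior = samples[i - window:i]
        mu, sigma = mean(prior), stdev(prior)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Resting heart rate hovering around 60 bpm, with one spike.
hr = [60, 62, 61, 59, 60, 63, 61, 60, 62, 61, 140, 61, 60]
print(detect_anomalies(hr))  # the spike at index 10 is flagged
```

The interesting part for AI isn't this trivial case but the subtle multi-signal patterns a fixed threshold misses; still, the principle is the same: the model never gets tired of looking.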
Doctors do that all the time though. That's why drugs are dispensed by a pharmacist, who double-checks the prescription.