It's about having a general understanding of the "baseline", i.e. healthy anatomy. This is something LLMs will never have; that's why they'll never truly reason, for lack of a "worldview", and they never know whether they're hallucinating. To aid doctors we don't need LLMs but rather computer vision and pattern recognition, as you correctly point out.
But it's important not to rely on it blindly. Doctors can easily recognize and correct for measurements taken with incorrect input, e.g. ECG electrodes attached in reverse order.
>It's having a general understanding/view of the "baseline", aka healthy anatomy. This is something LLMs will never have
You're making the mistake of conflating AI with LLMs.
I don't think LLMs will reliably be better than a board of doctors. But an Expert System probably will (if it isn't already). That's literally what they were created for.
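For context, classic medical expert systems (MYCIN being the canonical example) were rule-based: a knowledge base of if-then rules plus an inference engine that chains them. A toy forward-chaining sketch, where the rules and findings are invented placeholders and not real clinical guidance:

```python
# Toy rule-based expert system: forward chaining over if-then rules.
# Each rule maps a set of required findings to a conclusion.
# All rules/findings here are made up for illustration only.
RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"chest_pain", "st_elevation"}, "suspect_mi"),
    ({"suspect_mi"}, "recommend_cath_lab"),  # conclusions can feed later rules
]

def forward_chain(facts):
    """Fire every rule whose premises are satisfied, repeating until
    no new conclusions can be derived (a fixed point is reached)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"chest_pain", "st_elevation"}))
```

The appeal over an LLM is that every conclusion is traceable to explicit rules a physician can audit; real systems add certainty factors and backward chaining on top of this skeleton.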
The biggest downside of LLMs IMO isn't the millions of joules wasted on training models that are ultimately used to create funny images of cats with lasers. It's that all that money isn't being invested in truly helpful AI systems that would actually improve and save lives, such as medical expert systems.