Yet in modern computer games, modern graphics cards denoise a scene in real-time at 60 frames per second using machine learning models [1][2] while doing all the other rendering at the same time. Granted, that's ray tracing, and the resolution is lower, and they technically cheat by using additional information, but it might be that DXO is not optimized very well.
1: https://blogs.nvidia.com/blog/ai-decoded-ray-reconstruction/
Games typically run at 4K or less, which is about 8 MP; on the other hand, it's hard to buy a stills camera with less than 20 MP, and >40 MP is common. Most graphics algorithms are O(n^2), so we wouldn't expect the speed-up to be linear in pixel count. I've tried DxO, Lightroom, and Topaz, and they all perform about the same, so I don't think it's particularly unoptimized.
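A quick back-of-envelope on the resolution gap, assuming the O(n^2) scaling above and 4K = 3840x2160 (roughly 8.3 MP):

```python
# Relative denoising cost at stills-camera resolutions vs a 4K game frame,
# assuming (as above) cost grows quadratically with pixel count.
game_px = 3840 * 2160  # 4K frame, ~8.3 MP

for mp in (20, 40):
    cam_px = mp * 1_000_000
    ratio = cam_px / game_px
    print(f"{mp} MP: {ratio:.1f}x the pixels, ~{ratio**2:.0f}x the work if O(n^2)")
```

So a 40 MP still has about 4.8x the pixels of a 4K frame, which under quadratic scaling works out to roughly 23x the work, before even considering that games amortize denoising across frames with temporal data.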