A huge part of that, for me, was this from Scott Aaronson:
> Once you understand quantum fault-tolerance, asking “so when are you going to factor 35 with Shor’s algorithm?” becomes sort of like asking the Manhattan Project physicists in 1943, “so when are you going to produce at least a small nuclear explosion?”
That quote, alone, removed a lot of assumptions I had been carrying around.
Can anyone give the next layer of detail here? I understand the implications of this analogy, but I'm looking for the underlying reasons the analogy is apt.
That quote alone proves that the author knows nothing about nuclear physics.
There is a critical flux/density/mass threshold for nuclear bombs. You can create small nuclear explosions with particle accelerators, which is how it all started; you just cannot scale those accelerators to anything macroscopic. But the microscopic explosions were done very early, otherwise nobody would have had the necessary data to later extrapolate to larger scales.
The interesting question after that first discovery of fission was only how large the critical density or mass for a self-sustaining reaction would be. But as soon as you knew the critical mass and had enough fissile material to exceed that threshold, things became feasible, and only easier with more material.
Quantum computing doesn't have such a threshold; quite the opposite. As far as we know, larger problem sizes and larger numbers of qubits make things harder. Quantum error correction only changes the exponent in that relation.
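
To make "only changes the exponent" concrete, here is a rough back-of-the-envelope sketch, assuming a surface-code-style suppression law p_L ≈ A·(p/p_th)^((d+1)/2) and roughly 2d² physical qubits per logical qubit at code distance d. The constants A, p_th, and p below are illustrative assumptions, not properties of any real device:

```python
# Hedged sketch of quantum error correction overhead, using the common
# surface-code heuristic  p_L ~= A * (p / p_th) ** ((d + 1) / 2).
# All numbers here (A, p_th, p, the ~2*d^2 qubits per logical qubit)
# are illustrative assumptions, not measurements of any real device.

A = 0.1      # prefactor (assumed)
p_th = 1e-2  # error-correction threshold (assumed, surface-code-like)
p = 1e-3     # physical error rate per operation (assumed)

def distance_for(target_logical_error):
    """Smallest odd code distance d whose estimated logical error rate
    A * (p / p_th) ** ((d + 1) / 2) is at or below the target."""
    d = 3
    while A * (p / p_th) ** ((d + 1) / 2) > target_logical_error:
        d += 2
    return d

for target in (1e-6, 1e-9, 1e-12):
    d = distance_for(target)
    print(f"p_L <= {target:.0e}: d = {d}, "
          f"~{2 * d * d} physical qubits per logical qubit")
```

Under these assumed numbers, each factor-of-1000 improvement in the target logical error rate adds only a constant increment to d, so the physical-qubit overhead grows polynomially in d (polylogarithmically in 1/p_L). Note this only holds when p is already below p_th, and it multiplies the qubit count rather than removing the underlying scaling of the computation itself.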