What I find interesting is the supposition that weights must change. The connections of my motherboard do not change, yet it can simulate any system.
Perhaps there is an architecture that is write-once-read-forever, and all that matters is context.
There's almost certainly some of this in the human mind, and I bet there is much more of it than we are willing to admit. No amount of mental gymnastics is going to let you visualize 6D structures.
>supposition that weights must change
The thing is, that's where most of the learning and 'intelligence' is. If you don't change them, the model doesn't really get smarter.