Our brains, which are organic neural networks, are constantly updating themselves. We call this phenomenon "neuroplasticity."
If we want AI models that are always learning, we'll need the equivalent of neuroplasticity for artificial neural networks.
Not saying it will be easy or straightforward. There's still a lot we don't know!
How would you keep controls in place with that, though (safety restrictions, IP restrictions, etc.)? The companies selling models right now probably want to keep those fairly tight.
I wasn't explicit about this in my initial comment, but I don't think you can equate more forward passes with neuroplasticity. For one thing, we (humans) also /prune/. And pushing new weights is in a similar camp to RL, which simply overwrites the policy: you don't have the previous state anymore. But we humans, with our neuroplasticity, still know our previous states even after we've "updated our weights".
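To make the "you don't have the previous state anymore" point concrete, here's a minimal sketch, plain gradient descent on a toy linear model, nothing specific to any particular architecture or RL setup, and all names hypothetical. It just shows that a standard weight update overwrites the parameters in place: after adapting to a second task, the first task's weights are gone unless you explicitly chose to snapshot them.

```python
# Illustrative sketch only: a single linear "model" trained with plain
# gradient descent. The update overwrites the weights in place, so after
# adapting to task B there is no trace of the task-A weights unless we
# explicitly copy them ourselves.
import numpy as np

rng = np.random.default_rng(0)

def fit(w, X, y, lr=0.1, steps=200):
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w = w - lr * grad                      # overwrite: old w is gone
    return w

X = rng.normal(size=(100, 1))
w = np.zeros(1)

w = fit(w, X, 2 * X[:, 0])    # task A: y = 2x
w_after_A = w.copy()          # survives only because we chose to copy it
w = fit(w, X, -3 * X[:, 0])   # task B: y = -3x; silently forgets task A

print("after A:", w_after_A)  # ~[2.0]
print("after B:", w)          # ~[-3.0]; the ~2.0 state no longer exists in the model
```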