Geoffrey Hinton said that the breakthrough the AlphaGo team made was getting it to play against itself and improve by that means, since it could then go beyond the human training data it had learned from. He said that an equivalent form of self-training for general knowledge would let a superintelligence take off (this is from my memory, not an exact quote).
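For concreteness, here's a minimal sketch of the self-play idea: an agent plays both sides of a game, and the only training signal is who won, so no human examples are needed. This is a toy tabular Monte Carlo learner on the game Nim (take 1-3 stones, last stone wins), not anything like AlphaGo's actual method; all names and parameters are illustrative.

```python
import random

def train_self_play(pile=10, episodes=30000, alpha=0.3, eps=0.2, seed=0):
    """Tabular self-play on 1-3 Nim: one value table plays both sides,
    and each finished game's outcome is the only training signal."""
    rng = random.Random(seed)
    Q = {}  # Q[(stones_left, move)] -> estimated value for the player to move
    for _ in range(episodes):
        n = pile
        history = []  # (state, move) per ply, players alternating
        while n > 0:
            moves = [m for m in (1, 2, 3) if m <= n]
            if rng.random() < eps:  # explore occasionally
                m = rng.choice(moves)
            else:                   # otherwise play the current best move
                m = max(moves, key=lambda mv: Q.get((n, mv), 0.0))
            history.append((n, m))
            n -= m
        # Whoever took the last stone wins (+1); the sign flips each ply
        # going backwards, since the players alternate.
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward
    return Q

def best_move(Q, n):
    """Greedy move from the learned table."""
    return max((m for m in (1, 2, 3) if m <= n),
               key=lambda mv: Q.get((n, mv), 0.0))
```

With enough games it recovers Nim's known optimal strategy (take `n % 4` stones) for small piles, purely from playing itself. The open question in the thread is what the analogue of "who won" would be for general knowledge.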
The TechCrunch article doesn't specify how, or with what kind of data, a recursive general AI could achieve such a thing. If it is possible, that's exciting. It seems like a real philosophical question: how could a general AI self-train?
You'd probably have to embody it.
> If it is possible, that's exciting.
Would it be exciting, though? I mean, it would certainly excite some people, but I don't know that it would be something to rejoice over.