Geoffrey Hinton said that the AlphaGo team's breakthrough was getting it to play against itself and improve that way, since it could then go beyond the human training data it had learned from. He said that an equivalent form of self-training for general knowledge would let a superintelligence take off (this is from my memory, not an exact quote).
The TechCrunch article doesn't specify what kind of data a recursive general AI could use to achieve such a thing. If it's possible, that's exciting. Seems like a real philosophical question to answer: how could a general AI self-train?
Not the first to notice this, I'm sure, but it feels like there's an insane amount of pressure pushing capital toward anything with a hint of AI legitimacy. It's as if asset owners across the planet have come to a consensus that the only industry that will matter going forward is this one (fair enough, I guess), and this intense systemic pressure squeezes insane amounts of money toward literally any AI-shaped outlet that opens up. It's starting to feel more like "scared and desperate" money than "smart money".
AlphaZero worked because chess and Go have terminal rewards and positions you can prove are right or wrong. General intelligence has neither, and the leap from self-play in a well-defined game to self-play in arbitrary environments is the hard part Silver isn't really demoing. Sara Hooker's work on scaling laws lines up here (1).
(1) https://philippdubach.com/posts/the-most-expensive-assumptio...
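To make the "terminal rewards" point concrete, here's a toy sketch (my own illustration, not from the article or the thread): in a Nim-like game where taking the last stone wins, every line of play ends in a provably scored terminal state, so the value of every earlier position can be derived by backward induction. That's the property self-play in well-defined games exploits, and the one arbitrary environments lack.

```python
from functools import lru_cache

MOVES = (1, 2)  # legal takes in this toy Nim variant; last stone wins

@lru_cache(maxsize=None)
def best_value(stones):
    """Exact value for the player to move: +1 = forced win, -1 = forced loss.

    Computable only because the terminal position (stones == 0) has an
    unambiguous, verifiable reward to propagate backward from.
    """
    if stones == 0:
        return -1  # the previous player took the last stone; the mover has lost
    return max(-best_value(stones - m) for m in MOVES if m <= stones)
```

With takes of 1 or 2, any multiple of 3 is a forced loss for the player to move (e.g. `best_value(6) == -1`, `best_value(7) == 1`). The point is that these labels exist at all only because the game terminates in a state you can score; "general intelligence" tasks mostly don't.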
scam
Unless they have alien data, this is bullshit.
"pre-money valuation" I don't know what that means but it makes me roll my eyes so hard it hurts
>It is unclear how, when, or how much the venture will make, but this clearly hasn’t hindered fundraising.
Sorta sums up the whole industry.