It's interesting just how many opinions Amodei shares with AI 2027's authors despite coming from a pretty different context.
- Prediction of exponential AI research feedback loops (AI coding speeding up AI R&D), which Amodei says is already starting today
- AI being a race between democracies and autocracies with winner-takes-all dynamics, where compute is crucial and a global slowdown is infeasible
- Bioweapons, and mirror life in particular, called out as a major concern
- The belief that AI takeoff will be fast and broad enough to cause job losses that aren't replaced, rather than a repeat of past disruptions (although this essay seems markedly more pessimistic than AI 2027 about inequality after those job losses)
- Powerful AI within the next few years, perhaps as early as 2027
I wonder: did either work influence the other in any way, or is this just a case of "great minds think alike"?
It's because few realize how much of this AI industry is downstream of Thiel, Eliezer Yudkowsky, and LessWrong.com.
Early "rationalist" community was concerned with AI in this way 20 years ago. Eliezer inspired and introduced the founders of Google DeepMind to Peter Thiel to get their funding. Altman acknowledged how influential Eliezer was by saying how he is most deserving of a Nobel Peace prize when AGI goes well (by lesswrong / "rationalist" discussion prompting OpenAI). Anthropic was a more X-risk concerned fork of OpenAI. Paul Christiano inventor of RLHF was big lesswrong member. AI 2027 is an ex-OpenAI lesswrong contributor and Scott Alexander, a centerpiece of lesswrong / "rationalism". Dario, Anthropic CEO, sister is married to Holden Karnofsky, a centerpiece of effective altruism, itself a branch of lesswrong / "rationalism". The origin of all this was directionally correct, but there was enough power, $, and "it's inevitable" to temporarily blind smart people for long enough.
In the AI scene, everyone knows everyone.
It used to be a small group of people who mostly just believed that AI was a very important technology overlooked by most. Now they're vindicated, the importance of AI is widely understood, and industry headcount is up 100x. But the people who were on the ground floor are still there; they all know each other, and many keep in touch. And many who entered the field during the boom were already on the periphery of that same core group.
Which is how you get various researchers and executives who don't see eye to eye anymore but still agree on many of the fundamentals - or even on things that look like extreme views to an outsider. They may have agreed on them back in 2010.
"AGI is possible, powerful, dangerous" is a fringe view in the public opinion - but in the AI scene, it's the mainstream view. They argue the specifics, not the premise.