Ronan Farrow, the writer of this article, made a comment in this thread that is buried among all the comments: "As is always the case with incredibly precise and rigorously fact-checked reporting like this, where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long, with full deliberation about each sentence), there is more out there on that subject than is explicitly on the page."
I saw that before I read the article and it made me read the article in a very different way than I normally do. As I was reading, I found myself thinking, "Why is it worded that way? What else is the writer trying to say, or not say?"
It made reading this a lot more interactive than I normally associate with passive reading. Great job, Ronan!
Reading this makes me even happier to pay for Anthropic.
Amodei and his sister saw through the behavior and called it out.
> “Eighty per cent of the charter was just betrayed,” Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn’t remember this.) Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied. (Altman said that this was not quite his recollection, and that he had accused the Amodeis only of “political behavior.”) In 2020, Amodei, Daniela, and other colleagues left to found Anthropic, which is now one of OpenAI’s chief rivals.
“By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”
You can subtly see residue of this frustration in Dalton and Michael’s videos when Sam Altman comes up. Their view that Sam was a snake while at YC is only thinly veiled.
Hi @ronanfarrow — I have only had one interaction with Sam Altman in person, and I was advised to keep it to myself. I know this crowd may not care, but Altman is absolutely terrified of Black people — not in any contextual sense, but in a visceral, instinctive way. For someone who, as you put it, "controls our future," this should matter.
FYI: I am by far not the only one to have experienced this and it 100% impacts hiring and other decisions at OpenAI.
For me, the attempted productization of Sora was conclusive proof that 1) OAI was overcapitalized and desperate for revenue 2) safety didn't matter to them much 3) improving the world didn't matter much either.
At one point you mentioned an interaction with OpenAI staff where you were looking to interview AI Safety researchers. You were rebuffed b/c "existential safety isn't a thing". Does this mean that you could find no evidence of an AI Safety team at OAI after Jan Leike left? If you look at job postings, it does seem like they have significant safety staff...
Gobsmacking details about Altman's time as Y Combinator president, in case anyone's wondering.
Fantastic reporting.
I remember reading these direct quotes from SA in 2016 from the New Yorker and thinking, yeah, this guy is just miserable:
> “Well, I like racing cars. I have five, including two McLarens and an old Tesla. I like flying rented planes all over California. Oh, and one odd one—I prep for survival. My problem is that when my friends get drunk they talk about the ways the world will end. After a Dutch lab modified the H5N1 bird-flu virus, five years ago, making it super contagious, the chance of a lethal synthetic virus being released in the next twenty years became, well, nonzero. The other most popular scenarios would be A.I. that attacks us and nations fighting with nukes over scarce resources. I try not to think about it too much, but I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”
> "If you believe that all human lives are equally valuable, and you also believe that 99.5 per cent of lives will take place in the future, we should spend all our time thinking about the future. But I do care much more about my family and friends.”
> "The thing most people get wrong is that if labor costs go to zero... The cost of a great life comes way down. If we get fusion to work and electricity is free, then transportation is substantially cheaper, and the cost of electricity flows through to water and food. People pay a lot for a great education now, but you can become expert level on most things by looking at your phone. So, if an American family of four now requires seventy thousand dollars to be happy, which is the number you most often hear, then in ten to twenty years it could be an order of magnitude cheaper, with an error factor of 2x. Excluding the cost of housing, thirty-five hundred to fourteen thousand dollars could be all a family needs to enjoy a really good life.”
> "...we’re going to have unlimited wealth and a huge amount of job displacement, so basic income really makes sense. Plus, the stipend will free up that one person in a million who can create the next Apple.”
We need only ask the dead. Aaron Swartz knew what Altman is. The answer to the topic is no.
I found it very interesting that Altman et al. were worried that AI would become supremely intelligent and that China would make a supervirus or some AI drones or whatnot, but not a single person was worried about destroying all jobs because we wouldn't need humans any more.
Or maybe they were not so much "worried" but "hopeful" that they'd amass literally all the wealth in the world.
I usually use free archived versions to read mainstream journalism pieces. Seeing this convinced me to subscribe. I've always loved The New Yorker, and am happy to support serious longform journalism (and I know that Ronan is one of the best).
However, it's a shame that the only way to subscribe to the print version is to pay $260 upfront for the yearly subscription. Meanwhile the digital version is $1/week ($52 upfront) for one year, or even just $10 for one month.
[1] is also good to read as a follow-up, and to compare the personalities.
https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai...
> Chesky stayed in contact with the tech journalist Kara Swisher, relaying criticism of the board.
Ronan, interesting writing as always. I’m curious whether the role of the media as a pawn of the rich and powerful, used to sway perception and build narratives, concerns you, especially given your personal experiences with this and the reporting you’ve done. Are there reforms you think reporters and/or news organizations should adopt to make sure access doesn’t become direct or indirect manipulation, and how do you guard against that in your own reporting?
It's really interesting reading about how these folks view LLMs. Yeah, they're transformative, but I don't know that we're going to be eating ramen in a Neo-Tokyo street bar anytime soon. So much "A.G.I" mentioned in the article.
Great piece. And a good excuse to read up on the use of the diaeresis in English (e.g., coördination, reëlection) to distinguish repeated vowels; I hadn't seen the New Yorker's usage before.
Wow, this is an incredibly detailed piece. Really in depth reporting and the kind of detailed investigation we need more of on important topics like this.
> "Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence."
This is a very small detail, but an instinctive grimace crosses my face at the thought of this sort of Marvel reference, and I'm not entirely sure why.
Amazing that this article and an actual comment from Ronan Farrow is this far down the list while...Scientists Figured Out How Eels Reproduce (2022) has 6 times the points.
This anecdote is so absurd it sounds like satire. This is the guy with the $23M mansion?
> Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied.
A new Ronan Farrow piece is a rare gift (and Marantz is no slouch). Can't wait to read this in the physical magazine when it arrives!
I didn't have the mental energy to read the whole thing but man the final paragraph is some really good writing. Way to tie it all in together.
I am in my 40s and am going to be made redundant this June. In the future, only people who can afford tools like Claude and OpenAI, and who, more importantly, can create more value with them than others can, will be able to survive. Otherwise the game is more or less over, and I question what's next for my own future while I learn to use Claude out of FOMO. I cannot trust Sam or the others to have any interest in keeping this tech affordable for common people like me.
Why is the story so downranked? Do folks at Hacker News have something to do with it?
Of course he cannot be trusted. Anyone whose motivation is based on greed is by nature untrustworthy.
“By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”
This statement rings true.
JL, as PG has often mentioned, is his way of testing the “people” integrity aspect of YC and its startups. It’s not lost on me that Altman and Thiel, both associated with YC, turned out to be useful only in the short term, which highlights how regular “character” evaluations are required at higher levels of responsibility.
That animation of Altman with a thousand faces is oddly unsettling. Good job, New Yorker.
> Amodei, in one of his early notes, recalled pressing Brockman on his priorities and Brockman replying that he wanted “money and power.” Brockman disputes this. His diary entries from this time suggest conflicting instincts. One reads, “Happy to not become rich on this, so long as no one else is.” In another, he asks, “So what do I really want?” Among his answers is “Financially what will take me to $1B.”
I can't imagine having such uninspired thoughts and actually writing them down while in a role of such diverse and worthwhile opportunities. I'd like to ask "how the hell do these people find themselves in these positions", but I think the answer is literally what he wrote in his diary. What a boring answer. We need to filter these people out at every turn, but instead they're elevated to the highest peaks of power.
Without having read the article, reacting on the headline: no single person should be allowed to control our future. Democracy is a thing in large parts of the world, and we should try very hard to keep that functioning and even improve it.
It’s hard to know what the new information here is. Altman’s history has been reported on exhaustively.
Few people have left OpenAI over the years despite the safety abandonments, the shift away from non-profit status, the deception, etc.; there is too much money involved. Herein lies the actual rub. A lot of the people involved and named in the article are reprehensible: the Kushners, the Saudis, the Emiratis, the PayPal mafia, VC folks with god complexes. But as long as they have the money, we have to dance to their tune.
We really, really need a way for our society to be more equitable and to hold these people accountable.
It’s less about trusting one person and more about the structure. AI is concentrating capital, compute, and talent into a few hands; we’ve seen this before with railroads, oil, and semiconductors. It brings innovation, and also pricing power and political influence.
One thing that stands out when reading profiles like this is the number of positive and negative descriptions of the subject that agree. For example, there seems to be little dispute that Altman will happily say something that he knows/believes isn't true, there's just a lot of people who are willing to forgive any lies if the lies are in service of something they themselves agree with.
Really solid piece of journalism. I understand some stuff ends up on the cutting room floor in the editing process as length is eventually a factor. What was the one thing you most regret having to cut out of the final piece?
Would you trust a guy who controls a magical orb that answers everyone's questions for free and satisfactorily enough that people basically pay money to talk more to it, to use it responsibly? I won't.
He won't. If anything, OpenAI has been falling behind recently, and the trend won't change easily. It's like Netscape back in the day.
We focus these critiques far too much on the face rather than the underlying mechanics. Just like in politics, we critique the personality or the politician, yet the underlying system architecture escapes scrutiny.
Sam Altman clearly has a long history of nefarious activity. But the underlying threat posed by AI to society, the economy, and human freedom persists with or without his presence.
I bet Satya Nadella is regretting defending Altman now.
"If I don't destroy humanity someone far worse will do it" -Sam Altman
The number of "Altman doesn’t remember this" or "Altman denies this" is hilarious
I assume stuff gets cut for length in the editing process. What was the thing you wish had remained that was edited out?
Greg Brockman honestly sounds like a psychopath:
> In 2017, Amodei hired Page Hedley, a former public-interest lawyer, to be OpenAI’s policy and ethics adviser. In an early PowerPoint presentation to executives, Hedley outlined how OpenAI might avert a “catastrophic” arms race—perhaps by building a coalition of A.I. labs that would eventually coördinate with an international body akin to NATO, to insure that the technology was deployed safely. As Hedley recalled it, Brockman didn’t understand how this would help the company beat its competitors. “No matter what I said,” Hedley told us, “Greg kept going back to ‘So how do we raise more money? How do we win?’ ” According to several interviews and contemporaneous records, Brockman offered a counterproposal: OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them. According to Hedley, the thinking seemed to be, It worked for nuclear weapons, why not for A.I.?
I wonder if Sam might abandon the ship soon. Other co-founders already did.
The main reason is that he gets all the downsides without the upsides. I know $5B is a lot, but for a $700B company it isn't. If OpenAI were a regular for-profit, he would have been worth >$100B already.
This is probably one of the significant factors why other co-founders left too. It's just a lot of headaches with relatively low reward.
Altman's character is almost irrelevant next to how frictionless it is for a handful of people to set defaults for millions.
Beyond the question of should we trust Sam Altman to control our future - why on Earth should we want any single individual to control our future at all?
Suchir Balaji deserves to have his death investigated further.
I don't trust anyone who claims that LLMs today are superhumanly intelligent. All they do is perform compute-intensive brute-force attacks on the problem/solution space and call it 'reasoning', all while subsidising the real costs to capture the market. So much SciFi BS and extrapolation about a technology that is useful if adopted with care.
This technology needs to become a commodity to destroy this aggregation of power between a few organizations with untrustworthy incentives and leadership.
It seems unlikely OpenAI can survive long term with Sam at the helm. The challenge is that folks already realized that once, and yet here we are.
Control + Altman + Delete
Ronan Farrow here. Andrew Marantz and I spent 18 months on this investigation. Happy to answer questions about the reporting.