YC invests in people, not ideas. They have vetted him. They are always right about people. It's probably nothing.
Suchir Balaji deserves to have his death investigated further.
> Altman does not recall the exchange.
Altman SAYS he does not recall the exchange. Not the same thing.
My tendency is to believe that the individuals don't matter as much when it comes to the biggest risks. I'm not sure if this is a bias or a theory... but I lean toward some sort of "medium is the message" determinism.
>"He acknowledged that the alignment problem remained unsolved, but he redefined it—rather than being a deadly threat, it was an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram."
Before "don't be evil" was a cliché, I think it was a real guiding principle at Google, and they built a world-class business that way.
Facebook's rival ad platform didn't have search queries to target ads with. Aggressive utilization of user data was the only way they could build an AdWords-scale business. As they pushed this norm, Google followed.
Doomscroll addiction gets a lot of attention because engineers and journalists have children and parents. There are other risks though. Political stability, for example.
By the early 2010s, smartphones were reaching places that previously had almost no modern media, often powered by FB-exclusive data plans. The Arab Spring happened, then ISIS. FB-centric propaganda seemingly played a central role in the atrocities in Burma. Coups in Africa were powered by social-media propaganda. Worrying political implications in the West. Unhinged-uncle syndrome. Etc. Social media's risks and implications were more than just "inconvenience."
At no point did we really see tech companies go into mitigation mode. Even CYA was relatively limited. There was no moment of truth. It was business as usual.
So... I think OpenAI's initial charter was naive. Science fiction almost. It was never going to withstand commercial reality, politics, competition and suchlike. I think these are greater than the individuals involved.
That doesn't mean we should ignore, excuse or otherwise tolerate lack of integrity. But I don't think policing it is a way of reducing risk.
Whether the risk is skynet, economic turmoil, politics, psych epidemics or whatever... I don't think the personal integrity of executives is a major factor.
Sam failed upwards.
If you have to ask whether someone can be trusted, they usually can't.
The main animated picture reminded me of the evil king Ravan from the Ramayan, with his ten heads. Not sure whether it was done that way intentionally.
> while Y.C. took a six- or seven-per-cent cut
I shamefully have to admit that my monkey-brain smirked at an accidental 67-meme in a serious article.
This Sam Altman video is addictive. I could watch it over and over.
No
Seems this got buried from the front page very quickly
Excellent article, truly well-researched. As someone close to a pathological liar [1], I find that the idea of such a person being at the forefront of the creation of an artificial superintelligence confirms all the existential risks of such a piece of technology, and shows how naïve, if not ignorant, the average starry-eyed tech worker and investor is about this whole endeavour. It's easy to believe there is a lot of idealism and desire for a better world, but underneath, the greedy drive for money and power is excellently summarized in Greg Brockman's own thoughts: “So what do I really want? [...] Financially what will take me to $1B.”
Literally, the only hope for humanity is that large language models prove to be a dead-end in ASI research.
---
1: “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.” — I guess now I know of two people with these traits.
> Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different.
Isn't this really what everything is about? A pure research non-profit transitioned to a revenue generating enterprise because it had to, and a lot of people don't like that. Does that make it evil?
It's romantic to think that the magic of science and research can stand on its own, but even Ilya has admitted more recently that SSI needs to ship something consumer facing.
Anthropic, the lab that put all of its social capital in the safetyism basket, is having the exact same realization, with Claude Code being a mess of technically reckless, vibe-coded slop that is nevertheless the cash cow for the company.
Maybe it's time for everyone to realize that for an innovation this big to come to fruition, it needs to be either state funded or privately funded, the latter requiring revenue and a plausible vision for generating ROI.
Fuck no! Of course he can't be trusted. We know that. Nobody questions that. We know that about most of the "elites" running the show.
We're just in this shitty pit of despair where people are desperate. It's difficult to campaign for good when you're struggling and capital can jerk people around.
People pursue good for the sake of good at cost to themselves when times are very good or times are very, very bad.
Right now times are only merely very bad.
For those curious about how sama got to where he is and stayed on top for so long, I recommend reading the book The Sociopath Next Door by Martha Stout.
I am fairly confident when I say this -- sama is a sociopath. I don't know how anyone with solid intuition could come to any conclusion other than that the guy is deeply weird and off-putting.
Some concepts from the book:
> Core trait: The defining characteristic is the absence of conscience, meaning they feel no guilt, shame, or remorse.
> Identification: Sociopaths can be charming and appear normal, but they often lie, cheat, and manipulate to get what they want.
> The Rule of Threes: One lie is a mistake, two is a concern, but three lies or broken promises is the pattern of a liar.
> Trust your instincts over a person's social role (e.g., doctor, leader, parent)
Check and check.
OpenAI is too important to trust sama with. He needs to go. In fact, AI should be considered a public good, not a commodity pay-as-you-go intelligence service.
Nah, it will be Dario instead of Sam, I'd say? :-))
He doesn't control his own future... ChatGPT implodes in 18 months max, depending on how the Strait of Hormuz play goes...
Girls and boys, this is a prime example of a rhetorical question.
Interesting!
Am I the only one who feels like Claude is clearly winning code generation, and Gemini the general-purpose LLM race?
I just don’t feel like OpenAI has a legitimate shot at winning any of the AI battles.
Therefore, I feel like “Sam Altman may control our future” is a stretch.
Of course not. No one can be trusted to control our future.
No
If you are asking if a single human can be trusted with such a responsibility, the answer is, by default, no.
Sounds like a snake pit. None of them can be trusted. If we have to rely on companies to self appoint a benevolent ‘AI dictator’ we’re fucked.
The only high profile person in AI I’d consider perhaps worthy of trust is Demis Hassabis.
Ugh, I don't understand why only Altman scares you. What about Google, China, and the other players?
For me, the answer is that we need to create our own systems: decentralized agent networks, etc.
If you don't want to depend on one person or one company controlling your AI, build your own infrastructure.
The concentration of power in one or two people is the problem.
Does the article ever actually answer the title question?
No. Next question.
I don't trust him. He already made statements that convinced me I don't want to touch anything he controls. In a way it is similar to Meta and co. For some reason, US corporations behave very suspiciously once past a certain threshold size. With Win11 from Microsoft, I always wonder whether there is a not-so-hidden sub-agenda in place.
Watch Altman's reaction in the Tucker Carlson interview to the question about the (alleged) murder of OpenAI researcher Suchir Balaji.
His overall response, and particularly his body language, speaks volumes.
This is unfair to the original article, which is well-researched and worth a read. But the answer to this question is _always_ no. Nobody should have as much power as the oligarch class currently does, even if that power is inscrutable.
I don't even need to read the article to know that he unequivocally can't be trusted. Every action he's taken to this point has shown he will say literally anything to get what he wants.
The last quote, to a layperson, may sound completely sinister, but therein lies a deep and open computer science question: AIs really do seem to get their special capabilities from having a degree of freedom to output wrong and false answers. This observation goes all the way back to some of Alan Turing's musings on how an AI might one day be possible. And then there were early theorems related to this e.g. PAC learning. I'd love to know about what's happened since on this aspect, such as the role of noise and randomness, and maybe even hallucinations are a feature-not-bug in a fundamental sense, etc.
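To make the "freedom to output wrong answers" point concrete, here is a minimal sketch (my own illustration, not from the article or the comment above) of temperature sampling: at temperature zero a model always emits its single most likely token, while higher temperatures deliberately admit lower-probability, possibly wrong, outputs — the randomness is the feature.

```python
import math
import random

def sample(logits, temperature, rng=random.Random(0)):
    """Sample an index from raw scores; temperature ~0 degrades to greedy argmax."""
    if temperature <= 1e-9:
        # Deterministic: always pick the highest-scoring option.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled scores (max-subtracted for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: occasionally returns a lower-probability index.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]  # toy next-token scores
greedy = [sample(logits, 0.0) for _ in range(5)]               # always index 0
varied = {sample(logits, 1.5, random.Random(s)) for s in range(50)}
print(greedy, varied)
```

With temperature 0 the output is repetitive but "safe"; at 1.5 the sampler visits several indices, which is exactly the degree of freedom the comment describes — useful diversity and hallucination come from the same knob.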
As for the titular question, Betteridge's law of headlines applies. The answer is: No, we can't trust Sam Altman.
No.
Can Sam "The board can fire me, I think that's important." Altman be trusted?
If for no other reason, given what happened when the board fired him... no. I'd say not.
I don’t know, but any time I see an interview of Altman and I look at those eyes, I get creeped out.
The very idea of “trusting” monopoly capitalism.
Simple: NOOOOOOO!
How is this even a question?
I haven't read it yet. The answer is no.
It is disconcerting how Altman has used "AI safety" as a marketing tool. The more people imagine the universe turned into paperclips, the more they invest. Obviously Altman doesn't care about safety (I don't either; I'm not an AI-doomer). But he truly does come across as someone incapable of telling the truth. Are you even a liar if honesty is not in the set of possible outcomes?
Still, there's something oddly reassuring here: if you believe "AI safety" is essentially a buzzword (as I do), then this whole affair comes down to people squabbling over money and power. There really is nothing new under the sun.
This whole situation goes to show that yesterday's conspiracy theorists are today's realists. What's happening to the USA's leadership, to it as a country, and to its top companies is really scary for the rest of us. If this trend continues, we're all definitely going to end up in a kleptocracy.
I would really appreciate it if someone in the know could explain to me how a Markov chain with some backpropagation can surpass human cognition. Because right now I call BS.
I hope somebody just publishes The Ilya Memos. Sounds like a fun read
Ask Condé Nast if he can be trusted...
https://www.reddit.com/r/AskReddit/s/VWJVBNzc2u