Good exit for him imo
Dude builds an Anthropic-themed vibe-coded app (he calls himself an "Anthropoholic"), it becomes insanely popular and also happens to be completely insecure, Anthropic pressures him to change the project's name twice, he does, and finally OpenAI acquires the inventor.
Disappointing TBH. I completely understand that the OpenAI offer was likely too good to pass up, and I would have done the same in his position, but I wager he is about to find out exactly why a company like OpenAI isn't able to execute and deliver like he single-handedly did with OpenClaw. The position he is about to enter requires skills in politics and bureaucracy, not engineering and design.
Incredibly depressing comments in this thread. He keeps OpenClaw open. He gets to work on what he finds most exciting and helps reach as many people as possible. Inspiring, what dreams are made of really. Top comments are about money and misguided racism.
Personally I'm excited to see what he can do with more resources, OpenClaw clearly has a lot of potential but also a lot of improvements needed for his mum to use it.
Isn't OpenAI getting tanked because of its support of Trump and ICE?
the guy is Austrian... I'd have preferred if the project evolved further, but he used it as a trampoline to jump to OpenAI...
OpenAI is curating ChatGPT very well, which honestly I like. Compared to them, other companies, except maybe Anthropic, are not "caring" that much.
I hope this results in an OpenAI client harness where the data is local.
Time to uninstall
Congrats!
So many people are so salty, it’s wild. That’s peak HN here
Good luck!
good for you. make that money
Best way to democratize AI is to keep it as free or as inexpensive as possible.
Well, someone has to backfill Zoë Hitzig exiting.
Good thing Sam has no experience in transforming a foundation into a for-profit org ...
can't wait for this post to be memoryholed in 6 months when the community is a shell of its former self (no crustacean pun intended)
ok
His mum doesn't need an AI agent. She needs her family to pull their heads out of their asses and support her.
If the photo at the bottom of the post is a photo of the OpenAI team, then it’s white bros all the way.
Meaning, these products are being created by representatives of the kind of people carrying the most privilege and least affected by the negative impact of those decisions.
For example, Twitter did not start sanitizing location data in photos until women joined the team and indicated that such data can be used for stalking.
White rich bros do not get stalked. This problem does not exist in their universe.
congrats @steipete!
Peter is already a multimillionaire — he had an exit a few years ago for around $100 million. By his own account, he's spending $10,000+ per month on LLM tokens and other development costs. As long as OpenClaw stays open source and it remains possible to use all providers, this is totally fine by me.
Honestly, Anthropic really dropped the ball here. They could have had such an easy integration and gained invaluable research data on how people actually want to use AI — testing workflows, real-world use cases, etc. Instead, OpenAI swoops in and gets all of that. Massive missed opportunity.
Haters gonna hate, but bro vibe-coded himself into being a billionaire and having Sam Altman and Zuck personally fight over him.
This reads simply as an “Our Incredible Journey” type of post, but written for a person rather than a company.
I wouldn’t be able to sleep at night knowing I have to work for Sam Altman. Dude’s gross.
Who cares?
What to take away from this whole story:
This is a vibe-coded agent that can be replicated in little time. There is no value in the technology itself. There is value in the idea of personal agents, but this idea is not new.
The value is in the hype, from the perspective of OpenAI. I believe they are wrong (see next points).
We will see a proliferation of personal agents. For a short time, the money will be in API usage, since those agents burn a lot of tokens, often for results that could be obtained more directly without a generic assistant. At the current stage, poorly orchestrated, directed, and prompted/steered, they achieve results by brute force.
Whoever creates the LLM that is best at following instructions in a sensible way, and at coordinating long-running tasks, will reap the greatest benefit, regardless of whether OpenClaw is under the umbrella of OpenAI or not.
Claude Opus is currently the agent that works best for this use case. It is likely that this will help Anthropic more than OpenAI. It is wise, for Anthropic, to avoid burning money on an easily replicable piece of software.
These hypes are forgotten as fast as they are created. Remember Cursor? And that was much more of a true product than OpenClaw.
Soon, personal agents will be one of the fundamental products of AI vendors, integrated into your phone, nothing to install, part of the subscription. All of this will be irrelevant.
In the meantime, good for the guy who extracted money from this gold mine. He seems like a nice person. If you are reading this: congrats!
(throwaway account for obvious reasons)
Somehow we've normalized running random .exe files on our devices. Except now it's markdown.exe, and you sound like a zealot when advocating against it.
Move fast and break things...
The tone of this blog post reads as incredibly snobby and self-congratulatory, pure main character syndrome.
Please dispense with the “change the world” bullshit.
I understand that it’s healthy to celebrate your personal victories but in this context with this bro going to OpenAI to make 7 figures, maaaan I don’t think this guy needs our clicks.
On top of that there’s a better than 50% chance OpenAI suffocates the open source project and the alternative will be a paid privacy nightmare.
>"What I want is to change the world"
Thank you, we're already fucked. I am a hypocrite, of course.
hahahahaha, bro jumped at the bag
Damn. I just installed OpenClaw on my M2 Mac and hopped on a plane for our SKO in LAX. United delayed the plane departure by 2 hours (of course) and diverted the flight to Honolulu. And Claw (that's the name of my new AI agent) kept me updated on my rebooking options and new terminal/gate assignments in SFO. All through the free WhatsApp access on United. AND, it refactored all my transferred Python code, built a graph of my emails, installed MariaDB and restored a backup from another PC. And, I almost forgot, fixed my 1337x web scraping (don't ask) cron job by CloudFlare-proofing it. All the while sitting on a shitty airline, with shitty food and shittier seats, hurtling across the Pacific Ocean.
The future is both amazing and shitty.
Hope OpenClaw continues to evolve. It is indeed an amazing piece of work.
And I hope sama doesn't get his grubby greedy hands on OpenClaw.
Never understood the hype. Good for the guy but what was the product really? And he goes on and on about changing the world. Gimme a break. You cashed out. End of story.
OpenClaw is literally the most poorly conceived and insecure AI software anyone has ever made. Its users have had OpenClaw spend thousands of dollars, and do various unwanted and irreversible things.
This fucking guy will fit right in at OpenAI.
This is a jew who has a Github profile that's entirely AI slop
> OpenClaw clearly has a lot of potential but also a lot of improvements needed for his mum to use it.
We're working on security and about three key architectural improvements.
This feels less like an acquisition and more like signaling. OpenClaw isn’t infrastructure, it’s an experiment, and its value is narrative: “look what one person can build with our models.” OpenAI gets PR, optional talent, and no obligation to ship something deterministic.
The deeper issue is that agent frameworks run straight into formal limits (Gödel/Turing-style): once planning and execution are non-deterministic, you lose reproducibility, auditability, and guarantees. You can wrap that with guardrails, but you can’t eliminate it. That’s why these tools demo well but don’t become foundations. Serious systems still keep LLMs at the edges and deterministic machinery in the core.
Meta: this comment itself was drafted with ChatGPT’s help — which actually reinforces the point. The model didn’t decide the thesis or act autonomously; a human constrained it, evaluated it, and took responsibility. LLMs add real value as assistive tools inside a deterministic envelope. Remove the human, and you get the exact failure modes people keep rediscovering in agent frameworks.
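To make the "LLMs at the edges, deterministic machinery in the core" point concrete, here is a minimal Python sketch. Everything in it is an illustrative assumption (the propose_action stub, the allowlist, the action names); it reflects nothing about OpenClaw's actual architecture or any OpenAI API. The model only proposes a structured action; a deterministic validator and executor decide what runs.

    import json

    # Deterministic allowlist: the only actions the core will ever execute,
    # each with an exact argument schema. (Hypothetical actions for illustration.)
    ALLOWED_ACTIONS = {
        "read_file": {"path"},          # read-only
        "send_email": {"to", "body"},   # gated by human approval below
    }

    def propose_action(task: str) -> str:
        """Stand-in for an LLM call; returns JSON the model might produce."""
        return json.dumps({"action": "read_file", "args": {"path": "notes.txt"}})

    def validate(proposal: str) -> dict:
        """Deterministic gate: schema check plus allowlist, nothing else."""
        data = json.loads(proposal)
        action, args = data["action"], data["args"]
        if action not in ALLOWED_ACTIONS:
            raise ValueError(f"action not allowed: {action}")
        if set(args) != ALLOWED_ACTIONS[action]:
            raise ValueError(f"bad arguments for {action}: {sorted(args)}")
        return data

    def execute(data: dict) -> str:
        """Deterministic, auditable execution; all side effects live here."""
        if data["action"] == "read_file":
            try:
                with open(data["args"]["path"], encoding="utf-8") as f:
                    return f.read()
            except FileNotFoundError:
                return "(file not found)"
        if data["action"] == "send_email":
            ok = input(f"Send email to {data['args']['to']}? [y/N] ")
            return "sent (stub)" if ok.lower() == "y" else "skipped by human"
        raise AssertionError("unreachable: validate() filtered unknown actions")

    if __name__ == "__main__":
        proposal = propose_action("summarize my notes")
        print(execute(validate(proposal)))

The reproducibility and auditability live entirely in validate() and execute(): you can swap the model at the edge without changing the guarantees in the core, which is the whole argument above.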
So this is how you apply for a job in 2026...