
TSiege · yesterday at 1:13 AM

There are a few takeaways I think the detractors and celebrators here are missing.

1. OpenAI is saying with this statement, "You could be a multimillionaire while having AI do all the work for you." This buyout of something vibe coded and built around another open source project is meant to keep the hype going. The project is entirely open source and OpenAI could have easily done this themselves if they weren't so worried about being directly liable for all the harms OpenClaw can do.

2. Any pretense of AI safety concerns coming from OpenAI really falls flat with this move. We've seen multiple hacks, scams, and misaligned AI actions from this project, which has only been in the wild for a few months.

3. We've yet to see any moats in the AI space, and this scares the big players. Models are neck and neck with one another, and open source models are not far behind. Claude Code is great, but so is OpenCode. Now Peter has used AI to program a free app for AI agents.

LLMs and AI are going to be as disruptive as Web 1.0, and this is OpenAI's attempt to take more control. They're as excited as they are scared, seeing a one-man team build a hugely popular tool that in some ways is more capable than what they've released. If he can build things like this, what's stopping everyone else? Better to control the most popular one than try to squash it. This is a powerful new technology and immense amounts of wealth are trying to control it, but it is so disruptive that they might not be able to. It's so important to have good open source options so we can create a new Web 1.0 and not let it be turned into Web 2.0.


Replies

abalone · yesterday at 3:43 AM

I think this comment misses that OpenAI hired the guy, not the project.

"This guy was able to vibe code a major thing" is exactly the reason they hired him. Like it or not, so-called vibe coding is the new norm for productive software development and probably what got their attention is that this guy is more or less in the top tier of vibe coders. And laser focused on helpful agents.

The open source project, which will supposedly remain open source and could be "easily done" by anyone else in any case, isn't the play here. The whole premise of the comment about "squashing" open source is misplaced and logically inconsistent. By its own logic, anyone can pick up this project and continue to vibe out on it. If it falls into obscurity, it's precisely because the guy doing the vibe coding was doing something personally unique.

nilkn · yesterday at 2:09 AM

This comment is filled with speculation which I think is mostly unfounded and unnecessarily negative in its orientation.

Let's take the safety point. Yes, OpenClaw is infamously not exactly safe. Your interpretation is that, by hiring Peter, OpenAI must no longer care about safety. Another interpretation, though, is that offered by Peter himself, in this blog post: "My next mission is to build an agent that even my mum can use. That’ll need a much broader change, a lot more thought on how to do it safely, and access to the very latest models and research." To conclude from this that OpenAI has abandoned its entire safety posture seems, at the very least, premature and not robustly founded in clear fact.

Aurornis · yesterday at 1:34 AM

> This buyout of something vibe coded

I think the people making all of these comments about acquisitions or buyouts aren’t reading the blog post carefully: the post isn’t saying OpenClaw was acquired. It’s saying that Pete is joining OpenAI.

There are two sentences at the top that sum it up:

> I’m joining OpenAI to work on bringing agents to everyone. OpenClaw will move to a foundation and stay open and independent.

OpenClaw was not a good candidate to become a business because its fan base was interested in running their own thing. It’s a niche product.

ass22 · yesterday at 2:00 AM

"build a hugely popular tool"

Define "hugely popular" relative to the scale of OAI's user base... Personally, this thread is the first time I've heard of OpenClaw.

simbleau · yesterday at 3:53 AM

I think they want the man and the ideas behind the most useful AI tool thus far. Surprisingly, and OpenAI may see this, it is a developer tool.

OpenAI needs a popular consumer tool. Until my elderly mother is asking me how to install an AI assistant like OpenClaw, the same way she was asking me how to invest in the "new blockchains" a few years ago, we have not come close to market saturation.

OpenAI knows the market exists, but they need to educate the market. What they need is to turn OpenClaw into a project that my mother can use easily.

neya · yesterday at 5:17 AM

I am not a fan of OpenAI but they are not exactly hiring a security researcher. They are hiring an aspiring builder who has built something the masses love. They can always provide him the structure and support he needs to make his products secure. It's not mutually exclusive (safety vs hiring him).

jsemrau · yesterday at 3:09 AM

What is interesting about OpenClaw is its architecture. It is like an ambient intelligence layer, whereas other approaches up to now have been VSCode- or Chromium-based integrations into the PC layer.
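Roughly, the "ambient layer" idea is an agent that runs as a long-lived background process on the machine and reacts to whatever shows up around it, rather than living inside an editor or a browser tab. Here is a minimal Python sketch of that shape, assuming a hypothetical drop-folder inbox and a stub handle_event(); none of this is OpenClaw's actual code or API, just an illustration of the pattern:

    # Sketch of an "ambient" agent loop (hypothetical, not OpenClaw's real code):
    # a daemon that watches a local inbox folder and reacts to new messages,
    # instead of being invoked from inside an editor or browser extension.
    import time
    from pathlib import Path

    INBOX = Path.home() / "agent-inbox"   # hypothetical drop folder for incoming events

    def handle_event(text: str) -> None:
        # Placeholder: a real agent would hand this to a model and act on the reply.
        print(f"agent saw: {text!r}")

    def ambient_loop(poll_seconds: float = 2.0) -> None:
        INBOX.mkdir(exist_ok=True)
        seen = set()
        while True:                        # lives for the whole session, not per-request
            for event_file in sorted(INBOX.glob("*.txt")):
                if event_file not in seen:
                    handle_event(event_file.read_text())
                    seen.add(event_file)
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        ambient_loop()

The contrast with an editor plugin is that nothing here depends on VSCode or Chromium being open; the agent simply sits in the background and picks up work from its environment.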

jrsj · yesterday at 1:43 AM

There are plenty of straightforward reasons why OpenAI would want to do this; it doesn’t need to be some sort of malicious conspiracy.

I think it’s good PR (particularly since Anthropic’s actions against OpenCode and Clawdbot were somewhat controversial), plus Peter was able to build a hugely popular thing and clearly would be valuable to have on the team building something along the lines of Claude Cowork. I would expect these future products to be much stronger from a security standpoint.

avaer · yesterday at 5:03 AM

> The project is entirely open source and OpenAI could have easily done this themselves if they weren't so worried about being directly liable for all the harms OpenClaw can do.

This is true, and also true for many other areas OpenAI won't touch.

The best get-rich-quick scheme today (arguably not even a scheme) is to test the waters with AI in an area OpenAI will not or cannot touch for legal, ethical, or safety reasons.

I hate to agree with OpenAI's original "open" mission here, but if you don't do it, someone else somewhere will.

And as much as their commitment to safety is just lip service, they do have obligations as a big company with a lot of eyeballs on them to not do shady things. But you can do those shady things instead and if they work out ok, you will either have a moat or you will get bought out. If that's what you want.

motoboi · yesterday at 2:36 AM

This is basically an acqui-hire. Peter really does seem to be a genius, and they'd better poach him before Anthropic does.

mjr00 · yesterday at 1:18 AM

> 1. OpenAI is saying with this statement, "You could be a multimillionaire while having AI do all the work for you." This buyout of something vibe coded and built around another open source project is meant to keep the hype going. The project is entirely open source and OpenAI could have easily done this themselves if they weren't so worried about being directly liable for all the harms OpenClaw can do.

This is a great take and hasn't been talked about nearly enough in this comment section. Spending a few million to buy out OpenClaw('s creator), which is by far the most notable product built with Codex in a world where most developer mindshare is currently with Claude, is nothing for a marketing/PR stunt.

alephnerd · yesterday at 1:49 AM

Most of these are good callouts, but I think it is best to look at the evolution of the AI segment the same way "Cloud" developed into a segment in the 2000s and 2010s.

Point 3 is always a result of GTM and distribution: an organization that devotes time and effort to productionizing domain-specific models and selling them to its existing customers can outcompete a foundation model company that has no experience dealing with those personas. I have personally heard of situations where F500 CISOs chose to purchase Wiz's agent over anything OpenAI or Anthropic offered for Cloud Security and Asset Discovery, because they had established relationships with Wiz and Wiz had already proven its value. It's the same way PANW was able to establish itself in the Cloud Security space fairly early: it had already built trust with DevOps and infra teams through on-prem deployments and DCs, so those buyers were open to purchasing cloud security bundles from PANW.

Point 1 has happened all the time in the Cloud space. Not every company can invent or monetize every combination in-house, because there are only so many employees and so many hours in a week.

Point 2 was always more of an FTX and EA bubble, because EA adherents were over-represented in the initial mindshare around GenAI. Now that EA is largely dead, AI Safety and AGI in their traditional definitions have disappeared, which is good. Now we can start thinking about "Safety" the same way we think about "Cybersecurity".

> They're as excited as they are scared, seeing a one-man team build a hugely popular tool that in some ways is more capable than what they've released

I think that adds unnecessary emotion to how platform businesses operate. The reality is, a platform business will always be on the lookout for avenues to expand TAM, and however much engineers may wish otherwise, "buy" will always outcompete "build" because time is also a cost.

Most people I know working at these foundation model companies are thinking in terms of becoming an "AWS"-type foundational platform for our industry, and it's best to keep Nikesh Arora's principle of platformization in mind.

---

All this shows is that the thesis most early-stage VCs have been operating on for the past two years (that the application and infra layer is the primary layer to concentrate on now) holds. A large number of domain-specific model and app-layer startups have been funded in stealth over the past 2-3 years and will start a publicity blitz over the next 6-8 months.

By the time you see an announcement on TechCrunch or HN, most of us operators have already been working on that specific problem for the past 12-16 months. Additionally, HNers use "VC" in very broad and imprecise strokes and fail to distinguish Growth Equity (e.g., the recent Anthropic round) from Private Equity (e.g., Sailpoint's acquisition and later IPO by Thoma Bravo) from early-stage VC rounds (largely not announced until several months after the round, unless we need to get an O-1A for a founder or key employee).

isodev · yesterday at 3:42 AM

> 2. Any pretense of AI safety concerns coming from OpenAI really falls flat with this move.

And Peter created what is very similar to giant-scale scam/malware-as-a-service and is now just walking away from it, without taking responsibility or making it safe.