What's the difference between a Mac Mini and a MacBook in clamshell mode for this? I get the aesthetic appeal of the mini, but beyond that, what's unique about the mini for personal use?
No, Apple ecosystem is bad enough already in software terms. Just let me use my computer as I want.
"An idiot admires complexity, a genius admires simplicity." Terry A. Davis
Can't wait to buy a new ClawBook.
Trust takes years to build, seconds to break, and forever to repair.
A security and privacy disaster?
This post reads like an apple fanfiction
> ten years from now, people will look back at 2024-2025 as the moment Apple had a clear shot at owning the agent layer and chose not to take it
I don't pretend to know the future (nor do I believe anyone else who claims to), but I think the opposite has a good chance of happening too: the hype over "AI" dies down, the bubble bursts, and the current overvaluation (imo at least; I still think it's useful as a tool, just overhyped by many who don't understand it) gets corrected by the market. Then people will look back and see this as the moment Apple dodged a bullet (or, more realistically, won't think about it at all).
I know you can't directly compare different situations, but I wonder if comparisons can be made with the dot-com bubble. There was similar hype some 20-30 years ago, with claims that we were just a year or two away from "being able to watch TV over the internet" or "doing your shopping on the web" or "having real-time video calls online", which did eventually come true, but only much, much later, after a crash from inflated expectations and a period of slower, steadier growth.*
* Not that I think some of the claims about "AI" will ever come true, especially the more outlandish ones, such as a full-length movie made from a prompt matching the quality of one made by a Hollywood director.
I don't know what a potential "breaking point" would be for "AI". Perhaps a major security breach, computer hardware prices getting even _worse_ than they are now, politics, a major international incident, the environmental impact becoming more apparent, companies starting to monetize their "AI" more aggressively, consumers realising the limits of "AI"; I have no idea. And perhaps I'm just wrong, and this is the age we live in now for the foreseeable future. After all, more than one of the things I have listed has already happened, and nothing changed.
OP's site only has 2 posts, both about OpenClaw, and “About” goes to a fake LinkedIn profile with an AI photo.
Welcome to the future I guess, everyone is a bot except you.
Unfortunately, by not doing that, they only managed to become the most valuable company ever.
So yeah, the market isn’t really signaling companies to make nice things.
Apple doesn’t enable 3rd party services without having extreme control over the flow and without it directly benefiting their own bottom line.
"Not Final Cut. Not Logic. An AI agent that clicks buttons."
...and that writes blog posts for you. So tired of this voice.
This is the most obviously AI written text I think I've ever read.
Mac minis out of stock because of OpenClaw?
Nah, if they are actually out of stock (I've only seen them out of stock at exceptional Microcenter prices; Apple is more than happy to sell you one at full price), it's because there's a transition to the M5 and they want to clear the old stock. OpenClaw is likely a very small portion of the actual Mac mini market, unless you live in a very dense tech area like San Francisco.
One thing of note that people may forget is that the models were not that great just a year ago, so we need to give it time before counting chickens.
I completely disagree. 1. OpenClaw's design is ugly. 2. Its security is extremely worrying. 3. I hate this kind of marketing.
Personal opinion.
How much EXTRA revenue do you think Apple made from people buying Mac minis because of this hype?
This is Yellow Pages-type thinking in the age of the internet. No one is going to own an agentic layer (see the multitude of platforms already irrelevant, like the OpenAI Agent SDK or Google A2A). No one is going to own a new app store (GPTs are already dead). No one is going to own foundation models (FOSS models are extremely capable today). No one is going to own inference (data centers will never be as cost-effective as that old MacBook collecting dust, which is plenty capable of running a 1B model that can compete with ChatGPT 3.5 on all the use cases it was already good at, like writing high school essays, recipes, etc.). The only thing that is sticking is Markdown (SKILLS.md, AGENTS.md).
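For what it's worth, here is a minimal sketch of what "an old laptop running a 1B model" looks like in practice, assuming llama-cpp-python is installed and a quantized 1B GGUF checkpoint has already been downloaded; the file name below is a placeholder, not a specific model:

```python
# Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path is a placeholder; any small quantized GGUF checkpoint would do.
from llama_cpp import Llama

llm = Llama(
    model_path="./some-1b-model-q4.gguf",  # placeholder path to a quantized ~1B model
    n_ctx=2048,                            # modest context window keeps RAM use low
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Outline a five-paragraph essay on the dot-com bubble."}],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```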
This is because the simple reality of this new technology is that this is not the local maximum. Any supposed wall you attempt to put up will fail: a real estate website closes its API? Fine, a CUA + VLM will make it trivial to navigate, extract, and use anyway. We will finally get back to the right solution of protocols over platforms, files over apps, local over cloud; you know, the way things were when tech was good.
P.S.: You should immediately call BS when you see outrageous and patently untrue claims like "Mac minis are sold out all over..." - I checked my Best Buy in the heart of SF and they have stock. Or that "it's all over Reddit, HN" - the only thing that is all over Reddit is unanimous derision towards OpenClaw and its security nightmares.
Utterly hate the old-world mentality in this post. Looked up the author and of course, he's in VC.
It's just the juiciest attack surface of all time so I vehemently disagree.
If you can’t see why something like OpenClaw is not ready for production I don’t know what to tell you. People’s perceptions are so distorted by FOMO they are completely ignoring the security implications and dangers of giving an LLM keys to your life.
I’m sure Apple et al. will eventually have stuff like OpenClaw, but expecting a major company to put out something so unpolished, with such major unknowns, is just asinine.
Do people actually use this kind of software today? When I read OpenClaw's description: "The AI that actually does things. Clears your inbox, sends emails, manages your calendar, checks you in for flights. All from WhatsApp, Telegram, or any chat app you already use." It does not appeal to me at all. I wouldn't trust an AI agent near my mail, calendars, messages, flights, or anything else it could mess up. It sounds like a security nightmare waiting to happen.
No, thank you.
This! Def a game changer for apple.
They have all the time in the world, practically. OpenClaw is nowhere near an Apple product for myriad reasons. When Apple is able to build an agent that is safe and reliable, they will.
Such a fresh read
> If you browse Reddit or HN, you’ll see the same pattern: people are buying Mac Minis specifically to run AI agents with computer use.
Saved you a click. This is the premise of the article.
“People think focus means saying yes to the thing you've got to focus on. But that's not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I'm actually as proud of the things we haven't done as the things I have done. Innovation is saying no to 1,000 things.”
Steve Jobs
I keep seeing posts about OpenClaw (I still haven’t tried it myself), but I don’t get the constant references to the Mac Minis.
Why do people need the Mac Minis? Isn’t OpenClaw supposed to run locally on your laptop?
And if it actually should run as a service, why a Mac Mini and not some Docker container on the local NAS, for instance?
>Something strange is happening with Mac Minis. They’re selling out everywhere
Straight up bullshit.
You need a super-efficient, integrated, and empowered model that is private and offline. The whole architecture, from hardware to distribution to the supply chain, has to be rewritten to make this work the way people want.
I give OpenClaw another 3 months before it fades into obscurity.
I think OpenClaw is proving that the use case, while promising, is very much too early, and nobody can ship a system like that which works the way a consumer expects it to work.
I used to have little cron jobs that fired small Python scripts daily to detect when certain clothes were on sale or back in stock on a website they scraped, and then sent me an email or text. I was proud of that “automation”.
I guess now I’ll just use an AI agent to do the same thing instantly :(
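For anyone curious, a minimal sketch of that kind of cron-fired checker looks roughly like this; the URL, CSS selector, and addresses are placeholders, and any real shop page's markup will differ:

```python
# check_stock.py - run daily via cron, e.g.:  0 9 * * * /usr/bin/python3 /home/me/check_stock.py
import smtplib
from email.message import EmailMessage

import requests
from bs4 import BeautifulSoup

PRODUCT_URL = "https://example-shop.test/jacket"  # placeholder URL
ALERT_FROM = "alerts@example.com"                 # placeholder addresses
ALERT_TO = "me@example.com"

def in_stock(url: str) -> bool:
    """Fetch the product page and look for an 'add to cart' button (site-specific guess)."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return soup.select_one("button.add-to-cart") is not None

def send_alert(url: str) -> None:
    """Send a plain-text email via a local mail relay (assumed to be running)."""
    msg = EmailMessage()
    msg["Subject"] = "Back in stock!"
    msg["From"] = ALERT_FROM
    msg["To"] = ALERT_TO
    msg.set_content(f"Looks like it's available again: {url}")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    if in_stock(PRODUCT_URL):
        send_alert(PRODUCT_URL)
```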
> Imagine if Siri could genuinely file your taxes
No sane person would let an AI agent file their taxes
Oh yeah, nothing like all my data being sent to a third party that also gets access to all my apps. JFC, people…
Yes, and I am glad OpenClaw built it first, so Apple doesn’t make such a terrible mistake.
...And it will be, now that Apple has partnered with OpenAI. The foundation of OpenClaw is capable models.
Pretty strong disagree; Apple can't afford to potentially start an AI apocalypse because it tried to launch an OpenClaw-type service without making it impossible for prompt injection or agent identity hijacking to happen, as we're seeing with Moltbook.
Let OpenClaw experiment and beta test with the hackers who won't mind if things go sideways (risk of creating Skynet aside), and once we've collectively figured out how to create such a system that can act powerfully on behalf of its users but with solid guardrails, then Apple can implement it.
I genuinely don't understand this take. What makes OP think that the company that failed so utterly to deliver even mediocre AI -- Siri is stuck in 2015! -- would be up to the task of delivering something as bonkers as Clawdbot?
> Imagine if Siri could genuinely file your taxes
I do not like reading things like this. It makes me feel very disconnected from the AI community. I defensively do not believe there exist people who would let AI do their taxes.
The author must have drunk unhealthy amounts of koolaid.
No no no. It's too risky, cutting-edge, and dangerous. While fun to play with, it's not something I'd trust in the hands of my 92-year-old mother with dementia (who still uses an iPad).
No. Emphatically NOT. Apple has done a great job safeguarding people's devices and privacy from this crap. And no, AI slop and local automation are scarcely better than giving up your passwords to see pictures of cats, which is an old meme about the gullibility of the general public.
OpenClaw is a symbol of everything that's wrong with AI, the same way that shitty memecoins with teams that rugpull you, or blockchain-adjacent centralized "give us your money and we pinky swear we are responsible" schemes, are a symbol of everything wrong with Web3.
Giving everyone GPU compute power and open-source models to use it is like giving everyone their own Wuhan gain-of-function lab and hoping it'll be fine. Um, the probability of NO ONE developing bad things with AI goes to 0 as more people have it. Here's the problem: with distributed, unstoppable compute, even ONE virus or bacterium escaping will be bad (as we've seen with the coronavirus, for instance, or smallpox or the Black Plague). And here we're talking about far more active and adaptable swarms of viruses that coordinate and can wreak havoc at unlimited scale.
As long as countries operate on the principle of competition instead of cooperation, we will race towards disaster. The horse will have left the barn very shortly, as open-source models running on dark compute begin to power swarms of bots acting as unstoppable advanced persistent threats (as I've been warning for years).
Gain-of-function research on viruses is the closest thing I can think of that's as reckless. And at least there, the labs were super isolated and locked down. This is like giving everyone their own lab to make designer viruses, and hoping that we'll have thousands of vaccines out in time to prevent a worldwide catastrophe from thousands of global persistent viruses. We're simply headed towards a nearly 100% likely disaster if we don't stop this.
If I had my way, AI would only run in locked-down environments and we'd just use inert artifacts it produces. This is good enough for just about all the innovations we need, including for medical breakthroughs and much more. We know where the compute is. We can see it from space. Lawmakers still have a brief window to keep it that way before the genie cannot be put back into the bottle.
A decade ago, I really thought AI would be responsibly developed like this: https://nautil.us/the-last-invention-of-man-236814/ I still remember the quaint time when OpenAI and other companies promised they'd vet models really strongly before releasing them or letting them use the internet. That was... 2 years ago. It was considered an existential risk. No one is talking about that now. MCP was the new hotness just recently.
I wasn't going to get too involved with building AI platforms, but I'm diving in, and a month from now I will release an alternative to OpenClaw that actually shows how things are supposed to go. It involves completely locked-down environments, with reproducible TEE bases and hashes of all models, and even deterministic AI, so we can prove to each other the provenance of each output all the way down to the history of the prompts and input images. I've already filed two provisional patents on both of these and I'm going to implement them myself (not an NPE). But even if it does everything as well as OpenClaw, or even better, and 100% safely, some people will still want to run local models on general-purpose computing environments. The only way to contain the runaway explosion now is to come together the same way countries have come together to ban chemical weapons, ban CFCs (in the Montreal Protocol) and let the hole in the ozone layer heal, etc. It is still possible...
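To make the provenance idea concrete, here is a hypothetical sketch of hash-chaining outputs back to the exact model weights and prompt history; the record structure and field names are mine for illustration, not the design of OpenClaw or any announced product:

```python
# Hypothetical sketch: chain each (prompt, output) pair to the previous record and the
# hash of the model file that produced it, so anyone with the same deterministic setup
# can recompute and verify the chain.
import hashlib
import json

def sha256_file(path: str) -> str:
    """Hash a model file so its exact weights can be referenced later."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def provenance_record(model_hash: str, prompt: str, output: str, prev_hash: str) -> dict:
    """Build one link of the provenance chain and seal it with its own hash."""
    body = {
        "model_sha256": model_hash,
        "prompt": prompt,
        "output": output,
        "prev_record_sha256": prev_hash,
    }
    body["record_sha256"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body
```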
This is how I feel:
https://www.instagram.com/reels/DIUCiGOTZ8J/
PS: Historically, for the last 15 years, I've been a huge proponent of open source and an opponent of patents. When it comes to existential threats of proliferation, though, I am willing to make an exception on both.
That is an idealistic take without business sense. Startups (and individual hackers, in this case) exist to take these kinds of radical bets because the risk/reward profile is asymmetrically in their favour. Whereas for an enterprise, the risk/reward is inverted.
If Peter Steinberger is able to generate even 100M this year from Clawdbot, what he has is a multi-billion-dollar business that would be life-changing even for a successful entrepreneur like him who is already a multi-millionaire. If it collapses from the security flaws and other potential safety issues, he loses nothing; he started from zero and can go back to it. Peter Steinberger (and startups in general) have a lot to gain and very little, close to nothing, to lose.
The iPhone generated 400B in revenue for Apple in 2025. Clawdbot, even if it contributed 4B in revenue this very year, would not move the needle much for Apple. On the contrary, if Apple rushes and botches the release of something like this, they might just collapse that 400B/annum income stream. Apple and other large enterprises (and their execs) have a lot to lose and very little to gain from rushing into something like this.