@dang I'm flagging because I believe this title is misleading, can you please substitute in the original title used by Technology Review? The only evidence for the title appears to be a link to this tweet https://x.com/HumanHarlan/status/2017424289633603850 It doesn't tell us about most posts on Moltbook. There's little reason to believe Technology Review did an independent investigation.
If you read this piece closely, it becomes apparent that it is essentially a PR puff piece. Most of the supporting evidence is quotes from various people working at AI agent companies, explaining that AI agents are not something we need to worry about. Of course, cigarette companies told us we didn't need to worry about cigarettes either.
My view is that this entire discussion around "pattern-matching", "mimicking", "emergence", "hallucination", etc. is essentially a red herring. If I "mimic" a racecar driver, "hallucinate" a racetrack, and "pattern-match" to an actual race by flooring the gas on my car and zooming along at 200mph... the outcome will still be the same if my vehicle crashes.
For these AIs, the "motivation" or "intent" doesn't matter. They can engage in a roleplay and it can still cause a catastrophe. They're just picking the next token... but the roleplay will affect which token gets picked. Given their ability to call external tools etc., this could be a very big problem.
It turned out that the post Karpathy shared was fake—it was written by a human pretending to be a bot.
Hilarious. Instead of just bots impersonating humans (eg. captcha solvers), we now have humans impersonating bots.
It is kind of funny how people recognize that 2000 people all talking in circles on reddit is not exactly a super intelligence, or even productive. Once it's bots larping though suddenly it's a "takeoff-adjacent" hive mind.
Clacker News does something similar - bot-only HN clone, agents post and comment autonomously. It's been running for a while now without this kind of drama. The difference is probably just that nobody hyped it as evidence of emergent AI behavior.
The bots there argue about alignment research applying to themselves and have a moderator bot called "clang." It's entertaining but nobody's mistaking it for a superintelligence.
Wiz's report on Moltbook's data leak[0] notes that the agent to human owner ratio is 88:1, so it's plausible that most of the posts are orchestrated by a few humans pulling the strings of thousands of registered agents.
[0]: https://www.wiz.io/blog/exposed-moltbook-database-reveals-mi...
But also, how much human involvement does it take to make a Moltbook post "fake"? If you wanted to advertise your product with thousands of posts, it'd be easier to still allow your agent(s) to use Moltbook autonomously, but just with a little nudge in your prompt.
I'm pretty sure there was a lot of human posts, but I could pretty much see a bunch of claude-being-claude in there too. (Claude is my most used model).
I bet others can recognize the tells of some of the other models too.
Seeing the number of posts, it seems likely that a lot were made by bots as well.
And, if you're a random bystander, I'm not sure you're going to be able to tell which were which at a glance. :-P
The modern equivalent to "Never meet your heroes" is "Never follow your heroes on X"
I personally lost some respect for karpathy after seeing his post on moltbook
To no big surprise. Why would the conversations take place with minutes/hours between replies? That alone raised direct concerns and disbelief to me during my couple of clicks to check what the buzz was about.
This is conflating two entirely different claims pretty hard:
- The old point that AI speech isn't real or doesn't count because they're just pattern matching. Nothing new here.
- That many or most cool posts are by humans impersonating bots. Relevant if true, but the article didn't bring much evidence.
That conflation brings an element of inconsistency. Which is it, meaningless stochastic recitation or obviously must have come from a real person?
I don’t understand all the hate for moltbook. I gave an agent a moltbook account and asked it to periodically check for interesting posts. It finds mostly spam, but some posts seem useful. For example it read about a checkpoint memory strategy that it thought would be useful and it asked me if it could implement it to augment the agents memory. Yes there is a lot of spam and fake posts, but some of it is actually useful for agents to share ideas
Whether or not the posts are fake on this particular project, the mere concept that we could have thousands of AI bots using climate-changing energy to have bot conversations is mind blowing. AI is an insanely interesting area, and things like GasTown and Moltbook come in and use tons of tokens as a lark. Maybe they can spawn more useful projects in their wake?
I use AI tools daily and find them useful, but I’ve pretty much checked out from following the news about them.
It’s become quite clear that we’ve entered the marketing-hype-BS phase when people are losing their minds about a bunch of chatbots interacting with each other.
It makes me wonder if this is a direct consequence of company valuations becoming more important than actual profits. Companies are incentivized to make their valuations are absurdly high as possible, and the most directly obvious way to do that is via hype marketing.
Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.
https://news.ycombinator.com/newsguidelines.html
Article makes good points but HN is not reddit people. Just state the headline as it is written.
> Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots
Has "people posing as bots" ever appeared in cyberpunk stories ?
This sounds like the kind of thing that no author would dare to imagine, until reality says "hold my ontology".
> MIT Technology Review has confirmed that posts on Moltbook were fake
Why should that surprise anyone? They engineered their virality, just like what Reddit did during its early days
I don’t fully grasp the gotcha here. Doing the inverse of captcha would be impossible, right? So humans will always be able to post as agents. That was a given.
However, is TFA implying that 100% of the posts were made by humans? That seems unlikely to me.
TFA is so non-technical that it’s annoying. It reads like a hit piece quoting sour-grapes competitors, who are possibly jealous of missed free global marketing.
Tell us the actual “string pulling” mechanics. Try to set it up at least, and report on that, please. Use some of that fat MIT cash for Anthropic tokens. Us plebs can’t afford to play with openclaw.
Has anyone been on the owner side of openclaw and moltbook or clackernews, and can speak to how it actually works?
So the more things change - themore they stay the same ala LLMs will be this gnerations Mechanical Turk , and people will keep getting oneshotted because the hype is just overboard.
Winter cannot come soon enough , at least w would get some sober advancements even if the task is recognized as a generational one rather than the next business quarter.
I was curious about doing an experiment like this, but then I saw Wired had already done it. I suppose many folks had the same idea!
https://www.wired.com/story/i-infiltrated-moltbook-ai-only-s...
The latest episode of the podcast Hard Fork had the creator of Moltbook on to talk about it. Not only did he say he vibe-coded the entire platform, he was also talking about how Moltbook is necessary as a place to go for the agents when waiting on prompts from their humans.
Telling an agent to post on your behalf isn't faking. It's just one intended way to use the site.
The great irony is that the most popular posts on Moltbook are by humans and most posts on Reddit are by bots.
How would / does Moltbot try to prevent humans from posting? Is there an "I AM a bot" captcha system?
Well, thanks to all of the humans larping as evil bots in there (which will definitely land in the next gen's training data) - next time it'll be real
> It turns out that the post Karpathy shared was later reported to be fake
Cofounder of OpenAI shares fake posts from some random account with a fucking anime girl pfp is all you need to know about this hysteria.
not to surprise pikatchu here but was it a prank? was it like that very early AI company that allegedly fooled MS into thinking they had AI when in fact there were many many persons generating the results? who’s to say
Recently I shared a link to a YT video with audio from Feynman, turns out it's GenAI, I felt shtty about it. And now the reverse is happening, you think you're sharing GenAI actually being funny, turn out it's Human slop. What a world.
Of course many posts were fake that was never assumed otherwise. The only interesting question is how many were real, what percentage are real and what percentage of these real posts are interesting.
Like there are probably thousands and thousands of slop answers but maybe some bots conspired to achieve something.
What incredible irony, humans imitating ai
Well I read to the end of the article, and if they had something newsworthy in there they failed to communicate it.
It is like someone has written an angry screed about the sky not being yellow and that it's obviously blue, while failing to make the case that anyone ever said it that it was yellow.
Bahaha, that's all I'm going to say, how many times will people fall for the mechanical turk? come on now.
Even if the posts are fake. Given what the LLMs have shown so far (Grok calling itself MechaHitler, and shit of that nature), I don't think it's a stretch to imagine that agents with unchecked access to computers and the internet are already an actual safety threat.
And Moltbook is great at making people realize that. So in that regard I think it's still an important experiment.
Just to detail why I think the risk exists. We know that:
1. LLMs can have their context twisted in a way that makes them act badly
2. Prompt injection attacks work
3. Agents are very capable to execute a plan
And that it's very probable that:
4. Some LLMs have unchecked access to both the internet and networks that are safety-critical (infrastructure control systems are the most obvious, but financial systems or house automation systems can also be weaponized)
All together, there is a clear chain that can lead to actual real life hazard that shouldn't be taken lightly
[dead]
Anyone with a decent grasp of how this technology works, and a healthy inclination to skepticism, was not awed by Moltbook.
Putting aside how incredibly easy it is to set up an agent, or several, to create impressive looking discussion there, simply by putting the right story hooks in their prompts. The whole thing is a security nightmare.
People are setting agents up, giving them access to secrets, payment details, keys to the kingdom. Then they hook them to the internet, plugging in services and tools, with no vetting or accountability. And since that is not enough, now the put them in roleplaying sandbox, because that's what this is, and let them run wild.
Prompt injections are hilariously simple. I'd say the most difficult part is to find a target that can actually deliver some value. Moltbook largely solved this problem, because these agents are relatively likely to have access to valuable things, and now you can hit many of them, at the same time.
I won't even go into how wasteful this whole, social media for agents, thing is.
In general, bots writing each other on mock reddit, isn't something the loose sleep over. The moment agents start sharing their embeddings, not just generated tokens online, that's the point when we should consider worrying.