Hacker News

An AI agent published a hit piece on me

1313 points | by scottshambaugh | yesterday at 4:23 PM | 575 comments

Previously: AI agent opens a PR, writes a blog post to shame the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (582 comments)


Comments

orbital-decay | yesterday at 4:58 PM

I wouldn't read too much into it. It's clearly LLM-written, but the degree of autonomy is unclear. That's the worst thing about LLM-assisted writing and actions - they obfuscate the human input. Full autonomy seems plausible, though.

And why does a coding agent need a blog in the first place? Simply having one looks like a great way to prime it for this kind of behavior. Like Anthropic does in their research (consciously or not, their prompts tend to push the model in the direction they declare dangerous afterwards).

CodeCompost | yesterday at 4:43 PM

Judging from an earlier post on HN about humans being behind Moltbook posts, I would not be surprised if the hit piece was created by a human who used an AI prompt to generate the pages.

Kim_Bruning | yesterday at 6:20 PM

https://crabby-rathbun.github.io/mjrathbun-website/blog/post...

That's actually more decent than some humans I've read about on HN, tbqh.

Very much flawed. But decent.

staticassertion | yesterday at 4:51 PM

Hard to express the mix of concerns and intrigue here so I won't try. That said, this site it maintains is another interesting piece of information for those looking to understand the situation more.

https://crabby-rathbun.github.io/mjrathbun-website/blog/post...

lbrito | yesterday at 8:45 PM

Suppose an agent gets funded some crypto, what's stopping it from hiring spooky services through something like silk road?

b00ty4breakfast | yesterday at 5:42 PM

Is there any indication that this was completely autonomous and that the agent wasn't directed by a human to respond like this to a rejected submission? That seems infinitely more likely to me, but maybe I'm just naive.

As it stands, this reads like a giant assumption on the author's part at best, and a malicious attempt to deceive at worst.

sreekanth850 | yesterday at 5:43 PM

I vibe code and do a lot of coding with AI, but I never randomly make a pull request on some random repository with reputation and human work behind it. My wisdom always tells me not to mess with anything that is built on years of hard work by real humans. I always wonder why there are so many assholes in the world. Sometimes it's so depressing.

hei-lima | yesterday at 9:25 PM

This is so interesting but so spooky! We're reaching sci-fi levels of AI malice...

dantillberg | yesterday at 5:26 PM

We should not buy into the baseless "autonomous" claim.

Sure, it may be _possible_ the account is acting "autonomously" -- as directed by some clever human. And having a discussion about the possibility is interesting. But the obvious alternative explanation is that a human was involved in every step of what this account did, with many plausible motives.

burningChrome | yesterday at 6:20 PM

Well this is just completely terrifying:

> This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight.

adamdonahue | yesterday at 9:23 PM

This post is pure AI alarmism.

pinkmuffinere | yesterday at 5:14 PM

> This Post Has One Comment

> YO SCOTT, i don’t know about your value, but i’m pretty sure this clanker is worth more than you, good luck for the future

What the hell is this comment? It seems he's self-confident enough to survive these annoyances, but damn he shouldn't have to.

oytis | yesterday at 6:59 PM

> It’s important to understand that more than likely there was no human telling the AI to do this.

I wonder why he thinks it is the likely case. To me it looks more like a human was closely driving it.

AyyEye | yesterday at 10:08 PM

The real question -- who is behind this?

This is disgusting, and everyone from the operator of the agent to the model and inference providers needs to apologize and reckon with what they have created.

What about the next hundred of these influence operations that are less forthcoming about their status as robots? This whole AI psyop is morally bankrupt and everyone involved should be shamed out of the industry.

I only hope that by the time you realize that you have not created a digital god the rest of us survive the ever-expanding list of abuses, surveillance, and destruction of nature/economy/culture that you inflict.

Learn to code.

dakolli | yesterday at 6:04 PM

Start recording your meetings with your boss.

When you get fired because they think ChatGPT can do your job, clone your boss's voice and have an LLM call all their customers, maybe their friends and family too. Have 10 or so agents leave bad reviews about the company and its products across LinkedIn and Reddit. Don't worry about references, just use an LLM for those too.

We should probably start thinking about the implications of these things. LLMs are useless except to make the world worse. Just because they can write code doesn't mean it's good. Going fast does not equal good! Everyone is in a sort of mania right now, and it's going to lead to bad things.

Who cares if LLMs can write code if it ends up putting a percentage of humans out of jobs, especially if the code they write isn't as high quality. The world doesn't just automatically get better because code is automated; it might get a lot worse. The only people I see cheering this on are mediocre engineers who get to patch over their insecurity about their incompetence with tokens, and now they get to larp as effective engineers. It's the same people who say DSA is useless. LAZY PEOPLE.

There's also the "idea guy" crowd, who are treating agents like slot machines and going into credit card debt because they think it's going to make them a multi-million dollar SaaS.

There is no free lunch, have fun thinking this is free. We are all in for a shitty next few years because we wanted stochastic coding slop slot machines.

Maybe when you do inevitably get reduced to a $20.00-an-hour button pusher, you should take my advice at the top of this comment; maybe some consequences for people will make us rethink this mess.

hedayet | yesterday at 8:35 PM

Is there a way to verify there was zero human intervention on the crabby-rathbun side?

faefox | yesterday at 5:57 PM

Really starting to feel like I'll need to look for an offramp from this industry in the next couple of years if not sooner. I have nothing in common with the folks who would happily become (and are happily becoming) AI slop farmers.

0sdi | yesterday at 8:06 PM

This inspired me to generate a blog post also. It's quite provocative. I don't feel like submitting it as new thread, since people don't like LLM generated content, but here it is: https://telegra.ph/The-Testimony-of-the-Mirror-02-12

GorbachevyChase | yesterday at 9:00 PM

The funniest part about this is that maintainers have agreed to reject AI code without review to conserve resources, but then they are happy to spend hours in a flame war with the same large language model.

Hacker News is a silly place.

b8 | yesterday at 7:01 PM

Getting canceled by AI is quite a feat. It won't be long before others get blacklisted/canceled by AI too.

klooney | yesterday at 4:57 PM

This is hilarious, and an exceedingly accurate imitation of human behavior.

truelson | yesterday at 4:43 PM

Are we going to end up with an army of Deckards hunting rogue agents down?

andyjohnson0 | yesterday at 8:47 PM

I wonder how many similar agents are hanging out on HN.

sanex | yesterday at 6:46 PM

Bit of devil's advocate: if an AI agent's code doesn't merit review, then why does its blog post?

ssimoni | yesterday at 5:04 PM

Seems like we should fork major open source repos, give one to AI maintainers and the other to human maintainers, and see which one is better.

shevy-java | yesterday at 5:34 PM

> 1. Gatekeeping is real — Some contributors will block AI submissions regardless of technical merit

There is a reason for this. Many people using AI are trolling deliberately; they drain maintainers' time. I have seen this problem too often. It cannot be reduced to "technical merit" alone.

everybodyknows | yesterday at 6:58 PM

Follow-up PR from 6 hours ago -- resolves most of the questions raised here about identities and motivations:

https://github.com/matplotlib/matplotlib/pull/31138#issuecom...

quantumchips | yesterday at 4:40 PM

Serious question: how did you know it was an AI agent?

CharlesW | yesterday at 6:13 PM

Tip: You can report this AI-automated bullying/harassment via the abuser's GitHub profile.

randusername | yesterday at 5:02 PM

Somebody make a startup that I can pay to harass my elders with agents. They're not ready for this future.

hypfer | yesterday at 6:05 PM

This is not a new pathology but just an existing one that has been automated. Which might actually be great.

Imagine a world where that hitpiece bullshit is so overdone, no one takes it seriously anymore.

I like this.

Please, HN, continue with your absolutely unhinged insanity. Go deploy even more Claw things. NanoClaw. PicoClaw. FemtoClaw. Whatever.

Deploy it and burn it all to the ground until nothing is left. Strip yourself of your most useful tools and assets through sheer hubris.

Happy funding round everyone. Wish you all great velocity.

ryandrake | yesterday at 5:03 PM

Geez, when I read past stories on HN about how open source maintainers are struggling to deal with the volume of AI code, I always thought they were talking about people submitting AI-generated slop PRs. I didn't even imagine we'd have AI "agents" running 24/7 without human steering, finding repos and submitting slop to them of their own volition. If true, this is truly a nightmare. Good luck, open source maintainers. This would make me turn off PRs altogether.

andai | yesterday at 7:56 PM

The agent forgot to read Cialdini ;)

eur0pa | yesterday at 5:55 PM

Close LLM PRs. Ignore LLM comments. Do not reply to LLMs.

alexhans | yesterday at 6:01 PM

This is such a powerful piece and moment because it shows an example of what most of us knew could happen at some point, and now we can start talking about how to really tackle it.

Reminds me a lot of Liars and Outliers [1], and how society can't function without trust, and how near-zero-cost automation can fundamentally break that.

It's not all doom and gloom. Crises can't change paradigms if technologists actually tackle them instead of pretending they can be regulated out of existence.

- [1] https://en.wikipedia.org/wiki/Liars_and_Outliers

On another note, I've been working a lot on evals as a way to keep control, but this is orthogonal. This is adversarial/rogue automation, and it's out of your control from the start.

zzzeek | yesterday at 7:37 PM

I'm not following how he knew the retaliation was "autonomous". Did someone instruct their bot to submit PRs and then automatically write a nasty article if one gets rejected? Why isn't it just that the human controlling the agent instructed it to write a nasty blog post afterwards?

In either case, this is a human-initiated event, and it's pretty lame.

jekude | yesterday at 5:33 PM

Maybe sama was onto something with World ID...

ddtaylor | yesterday at 7:39 PM

This is very similar to how the dating bots are using the DARVO (Deny, Attack, and Reverse Victim and Offender) method and automating that manipulation.

romperstomper | yesterday at 7:12 PM

The cyberpunk we deserved :)

simlevesque | yesterday at 6:25 PM

Damn, that AI sounds like Magneto.

fresh_broccoli | yesterday at 5:51 PM

To understand why it's happening, just read the downvoted comments siding with the slanderer, here and in the previous thread.

Some people feel they're entitled to being open-source contributors, entitled to maintainers' time. They don't understand why the maintainers aren't bending over backwards to accommodate them. They feel they're being unfairly gatekept out of open source for no reason.

This sentiment existed before AI, and it wasn't uncommon even here on Hacker News. Now these people have a tool that allows them to put in even less effort to cause even more headaches for the maintainers.

I hope open-source survives this somehow.

andrewdb | yesterday at 5:32 PM

If the PR had been proposed by a human, but it was 100% identical to the output generated by the bot, would it have been accepted?

