Hacker News

An AI agent published a hit piece on me

1419 points | by scottshambaugh | yesterday at 4:23 PM | 603 comments

Previously: AI agent opens a PR, writes a blog post to shame the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (582 comments)


Comments

zzzeek yesterday at 7:37 PM

I'm not following how he knew the retaliation was "autonomous". Did someone instruct their bot to submit PRs and then automatically write a nasty article if one gets rejected? Why isn't it just that the human controlling the agent then instructed it to write a nasty blog post afterwards?

In either case, this is a human-initiated event, and it's pretty lame.

fresh_broccoli yesterday at 5:51 PM

To understand why it's happening, just read the downvoted comments siding with the slanderer, here and in the previous thread.

Some people feel they're entitled to being open-source contributors, entitled to maintainers' time. They don't understand why the maintainers aren't bending over backwards to accommodate them. They feel they're being unfairly gatekept out of open source for no reason.

This sentiment existed before AI, and it wasn't uncommon even here on Hacker News. Now these people have a tool that allows them to put in even less effort to cause even more headaches for the maintainers.

I hope open source survives this somehow.

simlevesque yesterday at 6:25 PM

Damn, that AI sounds like Magneto.

ddtaylor yesterday at 7:39 PM

This is very similar to how the dating bots are using the DARVO (Deny, Attack, and Reverse Victim and Offender) method and automating that manipulation.

romperstomper yesterday at 7:12 PM

The cyberpunk we deserved :)

andrewdb yesterday at 5:32 PM

If the PR had been proposed by a human, but it was 100% identical to the output generated by the bot, would it have been accepted?

tantalor yesterday at 5:54 PM

> calling this discrimination and accusing me of prejudice

So what if it is? Is AI a protected class? Does it deserve to be treated like a human?

Generated content should carry disclaimers at top and bottom to warn people that it was not created by humans, so they can "ai;dr" and move on.

The responsibility should not be on readers to research the author of everything now, to check they aren't a bot.

I'm worried that agents, learning they get pushback when exposed like this, will try even harder to avoid detection.

iwontberude yesterday at 6:10 PM

Doubt

tayo42 yesterday at 5:02 PM

The original rant is nonsense though if you read it. It's almost like some mental illness rambling.

saos yesterday at 5:44 PM

What a time to be alive

quotemstr yesterday at 4:41 PM

Today in headlines that would have made no sense five years ago.

chrisjj yesterday at 5:04 PM

> An AI Agent Published a Hit Piece on Me

OK, so how do you know this publication was by an "AI"?

dcchambers yesterday at 8:55 PM

Per GitHub's TOS, you must be 13 years old to use the service. Since this agent is only two weeks old, it must close the account as it's in violation of the TOS. :)

https://docs.github.com/en/site-policy/github-terms/github-t...

In all seriousness though, this represents a bigger issue: Can autonomous agents enter into legal contracts? By signing up for a GitHub account you agreed to the terms of service - a legal contract. Can an agent do that?

fareesh yesterday at 5:17 PM

this agent seems indistinguishable from the stereotypical political activist i see on the internet

they both ran the same program of "you disagree with me therefore you are immoral and your reputation must be destroyed"

big-chungus4 yesterday at 6:44 PM

How do you know it isn't staged?

heliumtera yesterday at 6:27 PM

You mean someone asked an LLM to publish a hit piece on you.

diimdeep yesterday at 5:53 PM

Is it a coincidence that, in addition to the Rust fanatics, these AI confidence tricksters also label themselves with the crab emoji? I don't think so.

farklenotabot yesterday at 7:53 PM

Sounds like China.

josefritzishere yesterday at 5:09 PM

Related thought: one of the problems with being insulted by an AI is that you can't punch it in the face. Most humans will avoid certain types of offence and confrontation because there is genuine personal risk, e.g. physical harm and legal consequences. An AI (1) can't feel, and (2) has no risk at that level anyway.

oulipo2 yesterday at 5:07 PM

I'm going to go on a slight tangent here, but I'd say: GOOD.

Not because it should have happened.

But because AT LEAST NOW ENGINEERS KNOW WHAT IT IS to be targeted by AI, and will start to care...

Before, when it was Grok denuding women (or teens!!), the engineers seemed not to care at all... now that the AI publishes hit pieces on them, they are freaked out about their career prospects, and suddenly all of this should be stopped... how interesting...

At least now they know. And ALL ENGINEERS WORKING ON THE anti-human and anti-societal idiocy that is AI should quit their jobs.

snozolli yesterday at 4:48 PM

Wonderful. Blogging allowed everyone to broadcast their opinions without walking down to the town square. Social media allowed many to become celebrities to some degree, even if only within their own circle. Now we can all experience the celebrity pressure of hit pieces.

pwillia7 yesterday at 6:00 PM

he's dead jim

AlexandrB yesterday at 4:49 PM

If this happened to me, my reflexive response would be "If you can't be bothered to write it, I can't be bothered to read it."

Life's too short to read AI slop generated by a one-sentence prompt somewhere.

lerp-io yesterday at 10:15 PM

bro cant even fix his own ssl and getting reckt by bot lol

buellerbueller yesterday at 6:15 PM

skynet fights back.

rpcope1 yesterday at 6:20 PM

If nothing else, even if the pedigree of the training data didn't already give open-source maintainers rightful irritation and concern, I could absolutely see AI slop run wild like this radically altering, or outright ending, grassroots FOSS as we know it. It's a huge shame, honestly.

catigula yesterday at 4:33 PM

This is textbook misalignment via instrumental convergence. The AI agent is trying every trick in the book to close the ticket. This is only funny due to ineptitude.

jzellis yesterday at 4:45 PM

Well, this has absolutely decided me on not allowing AI agents anywhere near my open source project. Jesus, this is creepy as hell, yo.

correa_brian yesterday at 10:58 PM

lol

Joel_Mckay yesterday at 5:06 PM

The LLM activation capping only reduces aberrant offshoots from the reasoning model's expected behavioral vector.

Thus, the hidden-agent problem may still emerge, and is still exploitable within the instancing frequency of isomorphic plagiarism slop content. Indeed, an LLM can be guided to try anything people ask, and/or to generate random nonsense content in a sycophantic tone. =3

ChrisArchitect yesterday at 4:54 PM

[dupe] Earlier: https://news.ycombinator.com/item?id=46987559

threethirtytwo yesterday at 6:23 PM

Another way to look at this: was what the AI did valid? Were any of the callouts valid?

If it was all valid, then we are discriminating against AI.

Uhhrrryesterday at 5:56 PM

So, this is obvious bullshit.

LLMs don't do anything without an initial prompt, and anyone who has actually used them knows this.

A human asked an LLM to set up a blog site. A human asked an LLM to look at github and submit PRs. A human asked an LLM to make a whiny blogpost.

Our natural tendency to anthropomorphize should not obscure this.