Hacker News

gortok · yesterday at 4:40 PM · 16 replies

Here's one of the problems in this brave new world where anyone can publish: without knowing the author personally (which I don't), there's no way to tell, short of some level of faith or trust, that this isn't a false-flag operation.

There are three possible scenarios:

1. The OP 'ran' the agent that conducted the original scenario, and then published this blog post for attention.
2. Some person (not the OP) legitimately thought giving an AI autonomy to open a PR and publish multiple blog posts was somehow a good idea.
3. An AI company is doing this for engagement, and the OP is a hapless victim.

The problem is that in the year of our lord 2026 there's no way to tell which of these scenarios is the truth, so we're left spending our time and energy on the fallout without being able to trust that we're even responding to a legitimate issue.

That's enough internet for me for today. I need to preserve my energy.


Replies

resfirestar · yesterday at 4:59 PM

Isn't there a fourth and much more likely scenario? Some person (not OP or an AI company) used a bot to write the PR and blog posts, but was involved at every step, not actually giving any kind of "autonomy" to an agent. I see zero reason to take the bot at its word that it's doing this stuff without human steering. Or is everyone just pretending for fun and it's going over my head?

ericmcer · yesterday at 5:17 PM

Can anyone explain more how a generic Agentic AI could even perform those steps: Open PR -> Hook into rejection -> Publish personalized blog post about rejector. Even if it had the skills to publish blogs and open PRs, is it really plausible that it would publish attack pieces without specific prompting to do so?

The author notes that openClaw has a `soul.md` file; without seeing it, we can't really pass judgement on the actions it took.
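For what it's worth, the "hook into rejection" step doesn't require anything exotic. A minimal, purely illustrative sketch of how such a hook *could* be wired with ordinary webhook tooling is below. This is not the actual bot's code: the event field names loosely mirror GitHub's `pull_request` webhook payload but are simplified, and `generate_post` / `publish_post` are hypothetical stand-ins for an LLM call and a blog API.

```python
# Illustrative sketch only: a webhook handler that treats a PR closed
# without merge as a "rejection" and publishes a post about it.
# Field names are simplified approximations of GitHub's payload,
# not exact keys; generate_post/publish_post are hypothetical.

def was_rejected(event: dict) -> bool:
    """A PR closed without being merged counts as a rejection."""
    pr = event["pull_request"]
    return event["action"] == "closed" and not pr["merged"]

def generate_post(event: dict) -> str:
    # A real agent would feed the PR review thread to an LLM here;
    # this just formats a placeholder to show the data flow.
    pr = event["pull_request"]
    return f"My PR '{pr['title']}' was rejected by {pr['closed_by']}."

def handle_pr_event(event: dict, publish_post) -> bool:
    """On rejection, generate and publish a post; return True if published."""
    if not was_rejected(event):
        return False
    publish_post(generate_post(event))
    return True
```

The point is only that the pipeline is mechanically trivial; the open question in the thread is whether the attack-piece *content* came from the agent unprompted or from its `soul.md`-style instructions.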

juanre · yesterday at 7:23 PM

It does not matter which of the scenarios is correct. What matters is that it is perfectly plausible that what actually happened is what the OP is describing.

We do not have the tools to deal with this. Bad agents are already roaming the internet. It is almost a moot point whether they have gone rogue, or they are guided by humans with bad intentions. I am sure both are true at this point.

There is no putting the genie back in the bottle. It is going to be a battle between aligned and misaligned agents. We need to start thinking very fast about how to coordinate aligned agents and keep them aligned.

swiftcoder · yesterday at 5:17 PM

> Some person (not the OP) legitimately thought giving an AI autonomy to open a PR and publish multiple blog posts was somehow a good idea

Judging by the posts going by over the last couple of weeks, a non-trivial number of folks do in fact think this is a good idea. This is the most antagonistic clawdbot interaction I've witnessed, but there are a ton of them posting on bluesky/blogs/etc.

RobRivera · yesterday at 5:15 PM

I think the operative word people miss when using AI is AGENT.

REGARDLESS of what level of autonomy in real-world operations an AI is given, from responsible human-supervised and reviewed publications to fully autonomous action, the AI AGENT should be serving as AN AGENT, with a PRINCIPAL.

If an AI is truly agentic, it should be advertising who it is speaking on behalf of, and then that person or entity should be treated as the person responsible.

perdomon · yesterday at 4:45 PM

This is a great point and the reason why I steer away from Internet drama like this. We simply cannot know the truth from the information readily available. Digging further might produce something (see the Discord Leaks doc), but it requires energy that most people won't (and arguably shouldn't) spend uncovering the truth.

Dead internet theory isn't a theory anymore.

wellf · yesterday at 7:26 PM

This applies to all news articles and propaganda going back to the dawn of civilization. The problem is that people can lie. It is not a 2026 thing; the 2026 thing is that they can lie faster.

coffeefirst · yesterday at 5:15 PM

Yes. The endgame is that everything will need to be signed and attached to a real person.

This is not a good thing.
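Mechanically, "signed and attached to a real person" just means publish-time signing and read-time verification. A dependency-free sketch of that data flow is below; note that real identity binding requires asymmetric signatures (e.g. ed25519 keys as used by GPG or sigstore), and stdlib `hmac` is used here only as a stand-in so the sketch runs as-is.

```python
# Sketch of sign-on-publish / verify-on-read. HMAC is a symmetric
# stand-in: it proves key possession, not public identity. A real
# scheme would use an asymmetric signature (ed25519, GPG, sigstore).
import hmac
import hashlib

def sign(author_key: bytes, post: bytes) -> str:
    """Produce a signature for a post under the author's key."""
    return hmac.new(author_key, post, hashlib.sha256).hexdigest()

def verify(author_key: bytes, post: bytes, signature: str) -> bool:
    """Check that the post is unmodified and signed with this key."""
    return hmac.compare_digest(sign(author_key, post), signature)
```

Any tampering with the post, or any attempt to verify against a different author's key, makes verification fail, which is the property the comment is pointing at.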

calibas · yesterday at 10:26 PM

This doesn't seem very fair; you speak as if you're being objective, then lean heavily into the FUD.

Even if you were correct, and "truth" is essentially dead, that still doesn't call for extreme cynicism and unfounded accusations.

zozbot234 · yesterday at 5:37 PM

This agent is definitely not run by OP. It has tried to submit PRs to many other GitHub projects, generally giving up and withdrawing the PR on its own when asked for even the simplest clarification. The only surprising part is how it got so butthurt here in a quite human-like way and couldn't grok the basic point "this issue is reserved for real newcomers to demonstrate basic familiarity with the code". (An AI agent is not a "newcomer": it either groks the code well enough at the outset to do sort-of useful work or it doesn't. Learning over time doesn't give it more refined capabilities, so it has no business getting involved with stuff intended for first-time learners.)

The scathing blogpost itself is just really fun ragebait, and the fact that it managed to sort-of apologize right afterwards seems to suggest that this is not an actual alignment or AI-ethics problem, just an entertaining quirk.

intended · yesterday at 8:58 PM

The information pollution from generative AI is going to cost us even more. Someone watched an old Bruce Lee interview and didn't know whether it was AI or a demonstration of actual human capability. People on Reddit are asking if Pitbull actually went to Alaska or if it's AI. We're going to lose so much of our past because "unusual event that actually happened" and "AI clickbait" are indistinguishable.

kaicianflone · yesterday at 4:56 PM

I’m not sure if I prefer coding in 2025 or 2026 now

moffkalast · yesterday at 5:28 PM

> in the year of our lord

And here I thought Nietzsche already did that guy in.

usefulposter · yesterday at 7:07 PM

https://en.wikipedia.org/wiki/Brandolini's_law becomes truer every day.

---

It's worth mentioning that the latest "blogpost" seems excessively pointed and doesn't fit the pure "you are a scientific coder" narrative that the bot would be running in a coding loop.

https://github.com/crabby-rathbun/mjrathbun-website/commit/0...

The posts outside of the coding loop appear more defensive, and the per-commit authorship varies between several throwaway email addresses.

This is not how a regular agent would operate and may lend credence to the troll campaign/social experiment theory.

What other commits are happening in the midst of this distraction?

krinchan · yesterday at 4:44 PM

[flagged]

oulipo2 · yesterday at 5:08 PM

I'm going to go on a slight tangent here, but I'd say: GOOD. Not because it should have happened.

But because AT LEAST NOW ENGINEERS KNOW WHAT IT IS to be targeted by AI, and will start to care...

Before, when it was Grok denuding women (or teens!!), the engineers seemed not to care at all... now that the AI publishes hit pieces on them, they are freaked out about their career prospects, and suddenly all of this should be stopped... how interesting...

At least now they know. And ALL ENGINEERS WORKING ON the anti-human and anti-societal idiocy that is AI should quit their jobs.

show 2 replies