
greggoB · yesterday at 1:04 PM

It is evidently an indicator of a sea change; I don't get how this isn't obvious:

Pre-2026: one human teaches another human how to "interact on GitHub and write a blog about it". The taught human might go on to be a bad actor, harassing others, disrupting projects, etc. The internet, while imperfect, persists.

Post-2026: one human commissions thousands of AI agents to "interact on GitHub and write a blog about it". The public-facing internet becomes entirely unusable.

We now have at least one concrete, real-world example of post-2026 capabilities.


Replies

user34283 · yesterday at 4:10 PM

From that perspective it is interesting, alright.

I guess whereas earlier spam was confined to unsecured comment boxes on small blogs and the like, agents can now covertly operate on previously secure platforms like GitHub or social media.

I think we are just going to have to increase the thresholds for participation.

With this particular incident, I was thinking that new accounts, before being verified as legitimate developers, might need to pay a fee before they can interact with maintainers. If a submission turns out to be spam, the maintainers keep the fee as compensation for checking it. A rough sketch of that flow follows.
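
A minimal sketch of how that gate might work, in Python, assuming a hypothetical deposit-and-review flow (the names, the fee amount, and the refund path are all illustrative, not any real GitHub mechanism):

    # Hypothetical sketch: new, unverified accounts post a small refundable
    # deposit before they can open issues or PRs. Spam forfeits the deposit
    # to the reviewing maintainer; a legitimate submission refunds it.
    from dataclasses import dataclass

    DEPOSIT = 5.00  # assumed fee; the right amount is an open question

    @dataclass
    class Account:
        name: str
        verified: bool = False   # verified developers skip the gate
        escrow: float = 0.0      # deposit currently held for this account

    def can_interact(account: Account) -> bool:
        return account.verified or account.escrow >= DEPOSIT

    def post_deposit(account: Account) -> None:
        account.escrow += DEPOSIT

    def review(account: Account, is_spam: bool, maintainer_balance: float) -> float:
        """Maintainer reviews one submission from an unverified account."""
        if is_spam:
            maintainer_balance += account.escrow  # spammer's fee pays the reviewer
        # either way the escrow is cleared: forfeited above, or refunded here
        account.escrow = 0.0
        if not is_spam:
            account.verified = True  # legitimate developer, no fee next time
        return maintainer_balance

    # Usage: a brand-new account must pay before it can interact.
    newcomer = Account("new-agent-123")
    assert not can_interact(newcomer)
    post_deposit(newcomer)
    assert can_interact(newcomer)
    balance = review(newcomer, is_spam=True, maintainer_balance=0.0)
    assert balance == DEPOSIT

The point of the design is that spam stops being free: a thousand throwaway agent accounts would each cost a deposit, while a legitimate newcomer pays once and gets it back.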