Hacker News

Stay Away from My Trash

160 points | by EvgeniyZh | last Tuesday at 6:53 AM | 65 comments

Comments

anileated · today at 8:39 AM

"Just show me the prompt."

If you don't have time, just write the damn issue as you normally would. I don't quite understand why one would waste so many resources and so much compute to expand some lazily conceived half-sentence into 10 paragraphs, as if it scores them some points.

If you don't have time to write an issue yourself or carefully proofread whatever the LLM makes up for you, whom are you trying to fool by making it look pretty? At least if it is visibly lazy, anyone knows to take it with the appropriate grain of salt.

Even if you are one of those who like to code by correcting LLMs all the time, surely you understand that if your LLM can make candy out of poo when you post an issue, it can do the exact same thing when it processes the issue and makes a PR. Likely next month it will do a better job of parsing your quick writing, and having it immediately "upscaled" now would only hinder that future performance.

wiseowise · today at 8:57 AM

> AI changed all of that. My low-effort issues were becoming low-effort pull requests, with AI doing both sides of the work. My poor Claude had produced a nonsense issue causing the contributor's poor Claude to produce a nonsense solution. The thing is, my shitty AI issue was providing value.

Seems like the shitty AI issue did more harm than good?

ramon156 · today at 10:43 AM

> A few years ago I submitted a full TypeScript rewrite of a text editor because I thought it would be fun. I hope the maintainers didn't read it. Sorry.

Love the transparency. To be fair, rewrites are almost impossible to review. Anything with a >5k-line diff takes multiple review cycles at a minimum. I don't know how some maintainers do it while also working on the codebase themselves.

vanillameow · today at 8:58 AM

> If writing the code is the easy part, why would I want someone else to write it?

Exactly my takeaway from current AI developments as well. I am also confused by corporate or management types who seem to think they are immune to AI developments. If AI ever does get to the point where it can write flawless code, what exactly makes them think they will do any better at composing these tools than the developers who've been working with this technology for years? Their job security rests precisely ON THE FACT that we are limited by time and need managed teams of humans to create larger projects. If this limitation falls, I feel like their jobs would be the first on the chopping block, long before mine as a developer. Competition from tech-savvy individuals would be massive overnight. Very weird horse to bet on unless you are part of a frontier AI company that does actually control the resources.

Havoc · today at 9:49 AM

>As a high-powered tech CEO, I'm

cough LinkedIn cringe cough

827a · today at 8:30 PM

> The question is more fundamental. In a world of AI coding assistants, is code from external contributors actually valuable at all?

Everything comes down to this. It's not just open source projects; companies are also slowly adjusting to this reality.

There are roughly two characteristics that humans need in this new environment: long-range technical leadership about how the system should be built (Lead+ Software Engineer), and deep product knowledge about how it's used (PM).

andai · today at 8:30 AM

>Once or twice, I would begin fixing and cleaning up these PRs, often asking my own Claude to make fixes that benefited from my wider knowledge: use this helper, use our existing UI components, etc. All the while thinking that it would have been easier to vibe code this myself.

I had an odd experience a few weeks ago, when I spent a few minutes trying to find a small program I had written. It suddenly struck me that I could have asked for a new one, in less time than it took to find it.

rezonant · today at 9:21 AM

Guy uses his project's GitHub issues as a personal TODO list, realizes his one-line GitHub issues look unprofessional, uses AI to hallucinate them into fake but realistic-looking issues, and then complains when he gets AI slop PRs.

An alternative idea: Use a TODO list and stop using GitHub Issues as your personal dumping ground, whether you use AI to pad them or not. If the issue requires discussion or more detail and would warrant a proper issue, then make a proper issue.

hypfer · today at 1:33 PM

Arguably, AI just accelerated a trend that was already happening and was already incorrect and unsustainable beforehand. The end of it just came a lot quicker.

The idea of pull requests from anyone, anywhere, at any time as the default was based on the assumption that we'd only ever encounter other hackers like us. For a time, public discourse acknowledged that this wasn't exactly true, but was very busy framing it as a good thing. Because something something new perspectives, viewpoints, whatever.

Some of that framing was actually true, of course, but often happened to exist in a vacuum, pretending that reality did not exist; downplaying (sometimes to the point of actual gaslighting) the many downsides that came with reduced friction.

Which leads us back to current day, where said reality got supercharged by AI and crashed their car (currently on fire) into your living room.

I feel like we could've avoided going to these extremes with a bit more modesty, honesty and time. But those values weren't really compatible with our culture over the last 15+ years.

Which leaves me wondering where we will find ourselves 15+ years from now.

netcan · today at 12:11 PM

I suppose this is banal/obvious to many, but I found this very interesting given the practical context.

>I write code with AI tools. I expect my team to use AI tools too. If you know the codebase and know what you're doing, writing great code has never been easier than with these tools.

This statement describes a transitional state. Contributors became qualified in this way before AI. New contributors using AI from day one will not be qualified in the same way.

>The question is more fundamental. In a world of AI coding assistants, is code from external contributors actually valuable at all?

...

>When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.

The negative net value of external contributions is reason enough to make the decision: end external contributions.

For the purpose of thinking up a new model, unpacking that net is the interesting part. I don't mean sorting between high- and low-effort contributions. I mean making productive use of low-effort one-shots.

AI tools have moved the old bottlenecks, and we are still trying to find where the new ones are going to settle.

whywhywhywhy · today at 9:51 AM

> Authors would solve a problem in a way that ignored existing patterns

If you're not writing your code, why do you expect people to read it and follow your lead on whatever convention you prefer?

I get people who hand-write code being fussy about this, but you start the article off devaluing coding entirely, then pivot to how the way your codebase is written has value that needs to be followed.

It's either low value or it isn't, but you can't approach it as worthless and then complain when others view your code as worthless and not worth reading too.

direwolf20 · today at 9:53 AM

You should never sign a CLA unless you're getting paid to.

Cthulhu_ · today at 10:28 AM

> If writing the code is the easy part, why would I want someone else to write it?

Arguably, because LLM tokens are expensive, LLM-generated code could be considered a donation? But then so is the labor involved, so it's kinda moot. I don't believe people pay software developers to write code for them to contribute to open source projects either (if that makes any sense).
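
For scale, here is a back-of-envelope sketch of what that donation amounts to. Every price and token count below is an illustrative assumption, not a real rate; actual pricing varies by model and changes constantly:

    // Back-of-envelope: token spend behind a single AI-generated PR.
    // All numbers are assumptions for illustration, not real prices.
    const inputPricePerMTok = 3.0;   // assumed $ per 1M input tokens
    const outputPricePerMTok = 15.0; // assumed $ per 1M output tokens

    const inputTokens = 200_000; // assumed context read over a session
    const outputTokens = 50_000; // assumed generated code and discussion

    const cost =
      (inputTokens / 1e6) * inputPricePerMTok +
      (outputTokens / 1e6) * outputPricePerMTok;

    console.log(`~$${cost.toFixed(2)} in tokens`); // ~$1.35 under these assumptions

Under assumptions like these, the "donation" is on the order of a dollar or two per PR, which rather supports the "kinda moot" point.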

pdyc · today at 2:38 PM

tldraw can afford to use the latest models without worrying about AI costs, but many open source projects can't. In those projects, maintainers often know the code best, just as at tldraw, and would benefit more from donated AI credits than from external contributions. I hope something like that gets implemented.

andai · today at 8:36 AM

> Once we had the context we needed and the alignment on what we would do, the final implementation would have been almost ceremonial. Who wants to push the button?

> ...

> But if you ask me, the bigger threat to GitHub's model comes from the rapid devaluation of someone else's code. When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.

> If that's the case, which I'm starting to think it is, then it's better to limit community contribution to the places it still matters: reporting, discussion, perspective, and care. Don't worry about the code, I can push the button myself.

CivBase · today at 1:33 PM

> If writing the code is the easy part, why would I want someone else to write it?

When was writing code ever the hard part?

If contributors aren't solving problems, what good are they? Code that doesn't solve a problem is cruft. And if a problem could be solved trivially, you probably wouldn't need contributions from others to solve it in the first place.

smusamashah · today at 8:53 AM

We need something like SponsorBlock: a Chrome extension that publicly tags slop contributors. Maintainers could then just reject PRs from those users.
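
A minimal sketch of what the maintainer side of that idea could look like. Everything here is hypothetical: the shared blocklist URL, its JSON format, and the auto-close workflow are invented for illustration; only the Octokit calls are real API:

    // Hypothetical: auto-close open PRs whose author appears on a shared,
    // community-maintained blocklist. BLOCKLIST_URL and its format are
    // invented for illustration; no such list exists today.
    import { Octokit } from "@octokit/rest";

    const BLOCKLIST_URL = "https://example.com/slop-blocklist.json"; // hypothetical
    const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

    async function triagePulls(owner: string, repo: string): Promise<void> {
      // Assumed format: a JSON array of GitHub logins.
      const blocked: string[] = await (await fetch(BLOCKLIST_URL)).json();

      const pulls = await octokit.rest.pulls.list({ owner, repo, state: "open" });
      for (const pr of pulls.data) {
        if (pr.user && blocked.includes(pr.user.login)) {
          // Leave a note explaining the closure, then close without review.
          await octokit.rest.issues.createComment({
            owner,
            repo,
            issue_number: pr.number,
            body: "Closing: author is on the shared slop blocklist.",
          });
          await octokit.rest.pulls.update({
            owner,
            repo,
            pull_number: pr.number,
            state: "closed",
          });
        }
      }
    }

    triagePulls("your-org", "your-repo").catch(console.error);

The hard part, of course, would be governance of the list itself, not the plumbing.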

dangus · today at 4:26 PM

IMO, you’re not really an open source project if you’re not accepting contributions with reasonably low friction.

I’ll call this what it is: a commercial product (they have a pricing page) that uses open source as marketing to sell more licenses.

The only PRs they want are ones that offer free professional-level labor.

They’re too uncaring about the benefits of an open community to come up with a workflow to adapt to AI.

It honestly makes me doubt that they can maintain their own code quality standards with their own employees.

Think about it: when/if this company grows to a larger size, if they can't handle AI slop from contributors, how can they handle AI slop from a large employee base?
