Hacker News

banannaiseyesterday at 3:22 PM4 repliesview on HN

The ticket has subtle errors in its description that are only caught by someone experienced with the codebase.

The code hides an exception behind an if-then-else that defaults to the most common state, which isn't caught until it breaks things for the 1% of users who don't have that state.
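A minimal sketch of that antipattern (all names here are hypothetical, invented for illustration): instead of raising on an unrecognized state, the branch silently falls through to the most common one, so the bug only surfaces for the minority of users in the uncommon state.

```python
# Hypothetical example of the antipattern: an unknown state should raise,
# but the else branch silently defaults to the most common state instead.
def get_user_tier(user: dict) -> str:
    """Return the user's billing tier."""
    tier = user.get("tier")
    if tier in ("free", "pro"):
        return tier
    else:
        # Bug: missing or unexpected tiers (the 1% case, e.g. "enterprise")
        # are quietly mapped to the common default instead of raising.
        return "free"

# The 1% of users with an unrecognized state get wrong behavior, silently:
print(get_user_tier({"tier": "enterprise"}))  # prints "free", no error
```

The fix is to make the unexpected case loud (raise a `ValueError` or log it) rather than coerce it to the default.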

The new feature quietly breaks a feature not covered by the acceptance tests.

The documentation is four times as long and nobody who relies on it can read it.

And I'm stuck spending my time going over tickets with a fine-toothed comb, reviewing PRs, and mentoring contributors to prevent all of this garbage from ending up in the live code.


Replies

elfly, yesterday at 6:28 PM

I'll give you 4.

1, 2 and 3 happened plenty in the good old days before AI. If anything, we can now make the code more thoroughly tested than before, but that requires a lot more engineering, which LLMs themselves make easier.

It's just that we haven't adapted our processes to do it.

chorsestudios, yesterday at 4:00 PM

People have noted similar issues ever since LLMs came out, but the rate at which the models have been improving on all of these is significant. Documentation being 4x too long could probably be fixed with a rule instructing the agent to keep it concise and no longer than 2-3 paragraphs.
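As a sketch, such a rule might live in whatever instructions file the agent reads; the filename and wording below are illustrative, not any specific product's syntax:

```markdown
## Documentation style rules (illustrative example)
- Keep generated documentation concise: at most 2-3 paragraphs per topic.
- Prefer one short usage example over an exhaustive option listing.
- Do not restate what the code already makes obvious.
```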

glial, yesterday at 8:49 PM

Definitely, but the first 3 issues are also created by human co-workers.

oliveralbertini, yesterday at 8:05 PM

Do you use Microsoft Copilot?