Hacker News

vultour · yesterday at 9:42 PM

Are you ashamed of other people finding out you used Claude? I think the co-authored-by bit should not be a setting at all; AI-generated code should be clearly identified.


Replies

isityettime · today at 3:53 AM

I use Claude at work. I've never instructed it to make a commit, and it's never attempted to make one. It would fail anyway, because my commits are signed with a YubiKey that requires presence detection, so I have to tap it.

But I don't want it to make commits, and I don't want to review its code in the Claude Code TUI, either. I want to read its changes in my text editor, decide what to drop or revise or revert, and then stage individual hunks or regions into logical commits.

If anyone asks I'll tell them I used an LLM, idc. I often mention it in commit messages or PRs. But I don't want LLM agents to write commits at all.
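The review-in-editor workflow described above can be sketched in plain git. This is a minimal, self-contained illustration using a throwaway repository; the file names and messages are invented for the example:

```shell
set -e
# Throwaway repo so the sketch runs standalone (paths are hypothetical).
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name Dev
printf 'original\n' > app.py
git add app.py && git commit -qm "init"

# Suppose an agent edited app.py; read the full diff before staging anything.
printf 'original\nagent change\n' > app.py
git diff

# Stage only what you want (use `git add -p app.py` to pick individual
# hunks interactively), then commit. Adding -S would additionally require
# a signature, e.g. a YubiKey tap on a hardware-backed signing key.
git add app.py
git commit -qm "app: describe the change in your own words"
git log --oneline
```

The point of `git add -p` here is that the commit boundary is chosen by the human, not the agent, regardless of how the working-tree changes were produced.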

sieve · yesterday at 9:53 PM

> AI-generated code should be clearly identified.

Let AI autonomously produce code of a quality I care about and I might consider giving it credit. I don't know how other people write code, but I come up with an idea and use a multitude of LLMs to brainstorm a comprehensive spec that any reasonably competent person could read and produce a working program from, including a locally running Q2 quant of Qwen 3.6. Even Kimi is as good as Claude at most coding tasks, and I don't see why any single agent deserves credit for my design.

Let artists and filmmakers start watermarking their output with the tools they use and I might reconsider my decision.

NateEag · yesterday at 9:50 PM

I think the Linux kernel's standard of disclosure via the "Assisted-By" trailer is the right move.

Makes it clear you used a bullshit machine, without implying it's an author.

...assuming you think using them at all is a good move. I won't deny they have some utility (though I'd argue much lower than many seem to think), but I do presently believe they're a disaster for humanity.

The ruination of the Internet with slop, the massive propagation of propaganda, and the insanely easy-to-wield tools for abuse are in no way worth the ability to accrue tech debt at 10x velocity (though to be clear, accruing tech debt can absolutely be a useful strategy, if one I personally dislike).

dangus · yesterday at 9:50 PM

Basically what you’re saying is that if AI does anything on your computer, you should lose control over whatever it impacts. If the AI touched something at all, in any way, big or small, you now lose ownership of the actions your computer takes (with open source tools, I might add).

In case you need reminding of common sense, I’m supposed to be allowed to decide what my commit messages are because it’s my fucking computer.

I prefer that my software is not a morality police.

bdangubic · yesterday at 9:47 PM

Mind-boggling that people are trying to hide this; it tells you all you need to know about our “profession.” The presence of that hook or the like in a place of business should be a fireable offense.