pre-ai if I had to include Google search queries in a commit, I’d be so embarrassed I’d probably never commit code like ever
> AI writes code
you mean plagiarism?
instead of committing code, we should just save videos of all of the zoom meetings about the code
If the full session capture is not encoded such that it provides insight into the architecture and the mistakes, what was the point? There needs to be (1) a complete capture (all tool calls etc.) that is (2) also curated to be readable (collapsible, chronological, easy to navigate etc.). A .txt dump of agent chain-of-thought is not particularly useful to anyone aside from another agent.
Hell to the no. In between coding sessions, I go off on plenty of sidebars about random topics that help me, the prompter, understand the problem better. Prompts in this way are entirely tied to context (pre-knowledge) that is not available to the LLMs.
Isn't a similar thing done by the Entire CLI, the startup that raised a $60M seed recently?
Mostly that’s going to be noise, but on some rare occasions I could see it being useful. So my unhelpful notion is that we might need a new thing: leave the commit message as a meaning-dense human-to-human message, and also have a development-process flight-recorder log stored alongside it. Storage is basically free, so why not?
In general, no, but sometimes, yes, or at least linked from the commit the same way user stories/issues are. Admittedly, the 'sometimes' from my perspective is mostly when there's a need to educate fellow humans about what's possible or about good prompting techniques and workarounds for the AI being dumb. It can also reveal more of the x%-by-AI, y%-by-human split, for example by diffing the outputs from the session against the final commits.
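A sketch of that diffing idea, assuming the agent's raw output has been saved to its own directory (the directory layout and file contents below are invented for illustration): `git diff --no-index` compares arbitrary paths even outside a repository.

```shell
set -e
work=$(mktemp -d)
cd "$work"
mkdir agent final
# What the agent produced vs. what the human actually committed
printf 'def add(a, b):\n    return a + b\n' > agent/util.py
printf 'def add(a: int, b: int) -> int:\n    return a + b\n' > final/util.py
# --no-index diffs plain paths; exit code 1 just means "differences found"
git diff --no-index --stat agent final || true
```

The line counts in the `--stat` output give a rough human-edit share per file; a full `git diff --no-index agent final` shows exactly which hunks the human changed.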
I just cannot for the life of me understand the problem this is solving. The only way it makes any sense is if sessions are atomic along with commits. If a session results in many commits, this becomes a fundamentally incomplete record, if it was ever a record at all. Even if we restrict ourselves to one session per commit, we are not in control of the agent’s context: the session details will contain the user's prompts and the reasoning summaries, but not a crucial part, which is how the agent assembles information about the project. So you’re left with a record that looks very complete but is silently incomplete. I don’t understand what the benefit of retaining that is.
No. Make me.
No
I’ve had the same thought, but after playing around with it, it just seems like adding noise. I never find myself looking at generated code and wondering “what prompt led to that?” There’s no point; I won’t get any kind of useful response. I’m better off talking to the developer who committed it; that’s how code review works.
"Pulp Project Policy on AI Generated Content / AI Assisted Coding" https://github.com/pulp#pulp-project-policy-on-ai-generated-... :
> [...]
> All contributors must indicate in the commit message of their contribution if they used AI to create them and the contributor is fully responsible for the content that they submit.
> This can be a label such as `Assisted By: <Tool>` or `Generated by: <Tool>` based on what was used. This label should be representative of the contribution and how it was created for full transparency. The commit message must also be clear about how it is solving a problem/making an improvement if it is not immediately obvious.
From "Entire: Open-source tool that pairs agent context to Git commits" (2026) https://news.ycombinator.com/item?id=46964096 :
> But which metadata is better stored in git notes than in a commit message? JSON-LD can be integrated with JSON-LD SBOM metadata
Nope. Especially with these agents, the thinking trace can get very large. No human will ever read it, and the agent will fill up its context with garbage trying to look for information.
I understand the drive for stabilizing control and consistency, but this ain't the way.
Proof sketch is not proof
I keep track of conversations internally. No way am I putting them on GitHub. The way I think, plan, and interrogate the LLM is part of my competitive advantage in the market. I consider it my property, and I would never let my clients read it (I pay for my own usage of AI). Never mind some juicy language and not being super straight and apolitical in a corporate sense. Basically, it would be a major privacy breach.
I agree so much
In principle, the documentation that's included in the code edit should have all the relevant information that a future agent would need.
No. Prompt-like document is enough. (e.g. skills, AGENTS.md)
This would just record a lot of me cursing at and calling the AI an idiot.
Like any discussion about AI there are two things people are talking about here and it's not always clear which:
1. Using LLMs as a tool but still very much crafting the software "by hand",
2. Just prompting LLMs, not reading or understanding the source code and just running the software to verify the output.
A lot of comments here seem to be thinking of 1. But I'm pretty sure the OP is thinking of 2.
Yes.
EOM
I feel like publishing the session is like publishing a sketch book. I don't need all of my mistakes and dumb questions recorded.
If that was important, why are we not already doing things like this? Should I have always been putting my browser history in commits?
I include my "plans" and a link to my transcript on all my PRs that include AI-generated code. If nothing else, others on my team can learn from them.
I've thought about this, and I do save the sessions for educational purposes. But what I ended up doing is exactly what I ask developers to do: update the bug report with the analysis, plan, notes etc. In the case there's a single PR fixing one bug, GitHub and Claude tend to prefer this information go in the PR description. That's ok for me since it's one click from the bug.
I must say, that would certainly show some funny conversations in a log.
Maybe Git isn't the right tool to track the sessions. Some kind of new Semi-Human Intelligence Tracking tool. It will need a clever and shorter name though.
obligatory: git notes
Lots of comments mentioned this; for those who aren't aware, please check out
Git Notes: Git's coolest, most unloved feature (2022)
https://news.ycombinator.com/item?id=44345334
I think it's a perfect match for this case.
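As a sketch of how git notes might carry a session transcript (the `sessions` ref name and file paths are my own invention, not an established convention):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo "hello" > app.txt
git add app.txt
git commit -q -m "Add greeting"
printf 'prompt: write a greeting\nmodel: (transcript follows)\n' > session.txt
# Attach the transcript to the commit without changing its hash
git notes --ref=sessions add -F session.txt HEAD
# Read it back later
git notes --ref=sessions show HEAD
```

One caveat: notes refs are not pushed or fetched by default, so sharing them takes an explicit step like `git push origin refs/notes/sessions`, which also means a leak (see the PII concern below) requires a deliberate push.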
Nope. Someone's going to leak important private data using something like this.
Consider:
"I got a bug report from this user:
... bunch of user PII ..."
The LLM will do the right thing with the code; the developer reviews the code and doesn't see any mention of the original user or the bug report data.
Now the notes thing they forgot about goes and makes this all public.
No, and neither should the actual code, at least not as-is. You should at least remove the excessive BS the AI comments and obsesses over.
A summary of the session should be part of the commit message.