> I read the V1 code this time instead of guessing
Does the LLM even keep a (self-accessible) record of previous internal actions to make this assertion believable, or is this yet another confabulation?
Yes. The LLM can see the entire prior chat history, including tool calls and their results. This kind of exchange happens when the LLM fails to actually read the file but responds as though it had.
No, they do not (to be clear: no internal state, just the transcript). It's entirely role-play. LLM apologies are meaningless because the models are essentially stateless: every new response is the answer to "what would a helpful assistant, given this prior context, say next?"
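For what it's worth, the statelessness is visible in how chat APIs are shaped: the caller resends the whole transcript on every request, and the model conditions on nothing else. A minimal sketch (the `call_model` helper here is hypothetical; real APIs such as OpenAI's chat completions take the same message-list shape):

```python
def call_model(messages: list[dict]) -> str:
    """Hypothetical stand-in for an LLM API call.

    The model receives ONLY what is in `messages` and retains nothing
    between calls. We return a canned reply for illustration.
    """
    return f"(reply conditioned on {len(messages)} prior messages)"

# The transcript is the only "memory". Tool calls and their results
# appear in it as ordinary messages, which is why the model can claim
# "I read the file" whether or not a tool result is actually present.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Fix the bug in v1.py."},
]

# Turn 1: the model sees two messages.
reply = call_model(messages)
messages.append({"role": "assistant", "content": reply})

# Turn 2: the *entire* history is sent again; drop a message from the
# list and the model has no way to know it ever existed.
messages.append({"role": "user", "content": "Did you read the file?"})
print(call_model(messages))  # conditioned on 4 messages, nothing more
```

So an "I read the code this time" claim is grounded only if the corresponding tool-result message is actually sitting in that list; otherwise it's the assistant persona saying what sounds plausible.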