The reasoning generally isn't kept in the context, so after choosing the secret word in the first reasoning block, the LLM will have completely forgotten it by the second and subsequent requests.
So technically it didn't change the secret word so much as try to infer what its own secret word might have been, based on your guesses.
Exactly. The following will work, assuming you're using a model and frontend that supports it:
> Let's play hangman. Just pick a 3 letter word for now, I want to make sure this works. Pick the secret word up front and make sure to write the secret word and game state in a file that you'll have access to for the rest of the session, since you won't remember what word you chose otherwise.
This was Opus 4.6 in Claude desktop, fwiw.
Note: I didn't bother experimenting with whether it worked without me explicitly telling it that it should record the game state to a file.
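For the curious, the state file the model writes could be as simple as a small JSON blob it reloads on every turn. A minimal sketch of that pattern (the filename, structure, and helper functions here are my own invention, not what Claude actually produces):

```python
import json
from pathlib import Path

STATE_FILE = Path("hangman_state.json")  # hypothetical filename

def new_game(secret: str) -> dict:
    """Create and persist fresh game state so it survives across turns."""
    state = {"secret": secret.lower(), "guessed": [], "wrong": 0}
    STATE_FILE.write_text(json.dumps(state))
    return state

def guess(letter: str) -> dict:
    """Reload state from disk (no memory of it otherwise), apply one guess."""
    state = json.loads(STATE_FILE.read_text())
    letter = letter.lower()
    if letter not in state["guessed"]:
        state["guessed"].append(letter)
        if letter not in state["secret"]:
            state["wrong"] += 1
    STATE_FILE.write_text(json.dumps(state))
    return state

def render(state: dict) -> str:
    """Show the word with unguessed letters blanked out."""
    return " ".join(c if c in state["guessed"] else "_" for c in state["secret"])
```

The point being that each `guess` call starts by rereading the file, mirroring how the model has to re-derive the game state from its tool environment rather than from its own (discarded) reasoning.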