> There is always a significant chance all of it is leaked sooner or later.
As an adversarial/worst-case model, it can be useful to think of every service as potentially storing forever all the data you ever give it access to. As a practical matter, though, services have terms of service that they follow. If your Claude Code terms say that your data will not be used for training, you can be reasonably confident that it won't be, and it is even less likely that the raw inputs are being stored forever (as "significant chance all of it is leaked sooner or later" suggests). For example, Google has entire teams dedicated to compliance with users' "wipeout" settings. You can take a look at https://myactivity.google.com and https://myadcenter.google.com to see some of what Google knows and thinks about you, and if you've chosen "Auto-Delete after 3 months" or similar, you can be very sure it will be gone after that time. Every single team that stores user data is required to comply with this.
I do think the services make it harder than it should be to find out what the terms are for a given use of their services: whether the details will be stored, and for how long. My point is just that you can find this out and generally rely on it, at least at the time, under a reasonable threat model (i.e., not treating the service as a malicious adversary running a giant law-breaking conspiracy that has never been exposed).