Yes, surely this caching mechanism is undocumented and unexpected behavior?
Looking at the affected workflow, I don't see any explicit caching, so is this all happening "magically under the hood" on GitHub's side?
This looks like a screw-up on GitHub's side, not TanStack's (except for putting trust in GitHub in 2026, perhaps).
Yes, various footguns of pull_request_target are documented, but I don't believe this is one of them? GitHub needs to own this, OR just deprecate and remove pull_request_target altogether.
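As I understand the Actions docs, a saved cache entry is scoped to the ref the workflow run executed under, and pull_request_target runs execute under the base branch's ref while typically checking out the PR's head. A minimal sketch of that dangerous combination (hypothetical workflow, not TanStack's actual file):

```yaml
name: pr-validate
on:
  pull_request_target:   # run executes in the context of the BASE branch
jobs:
  install:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # ...but checks out the untrusted PR head
          ref: ${{ github.event.pull_request.head.sha }}
      - uses: actions/cache@v4
        with:
          path: ~/.pnpm-store
          key: ${{ runner.os }}-pnpm-store-${{ hashFiles('pnpm-lock.yaml') }}
      # Preinstall scripts from the PR run here; whatever they write into
      # the cached path is saved under the base branch's cache scope
      # (refs/heads/main), where later trusted runs can restore it.
      - run: pnpm install
```

If that reading is right, the cache poisoning doesn't need any undocumented behavior, just this trigger plus an explicit cache step.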
From the postmortem timeline:

> 2026-05-11 11:29 Cache entry Linux-pnpm-store-6f9233a50def742c09fde54f56553d6b449a535adf87d4083690539f49ae4da11 (1.1 GB) saved to GitHub Actions cache for TanStack/router, scope refs/heads/main — keyed to match what release.yml will look up on the next push to main
Why was that scoped to refs/heads/main?
This is the exploited version of the affected workflow. Why does the result of preinstall scripts run on PRs here end up on the main branch? Or did I overlook some critical part of the Actions docs or the TanStack actions?
https://raw.githubusercontent.com/TanStack/router/d296252f73...
I take the above back. TanStack messed this up in the way they explicitly cache. This is run from the affected workflow: https://github.com/TanStack/config/blob/main/.github/setup/a...
The restore-keys prefix looks too wide, and this still looks like an issue. Such broad caching may also cause problems if they ever upgrade the major Node.js version independently of the OS, for example.
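To illustrate what I mean by "too wide" (a sketch of the common actions/cache pnpm pattern, not necessarily TanStack's exact config): a bare prefix in restore-keys lets a release job fall back to any entry sharing that prefix, including one saved by a poisoned run.

```yaml
# Too wide: on a key miss, ANY entry starting with "Linux-pnpm-store-"
# can be restored, including a poisoned one from an earlier run.
- uses: actions/cache@v4
  with:
    path: ~/.pnpm-store
    key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
    restore-keys: |
      ${{ runner.os }}-pnpm-store-

# Tighter: also pin the Node major version in the key (matrix.node is a
# hypothetical matrix variable here), and drop the restore-keys fallback
# in release workflows so only an exact lockfile match is restored.
- uses: actions/cache@v4
  with:
    path: ~/.pnpm-store
    key: ${{ runner.os }}-node${{ matrix.node }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
```

Dropping restore-keys costs some cold-cache time on lockfile changes, but for a release pipeline that seems like the right trade.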