What I want to focus on is the mental model of your CI pipeline, and the problem with too much YAML. Consider this quote:
> Cache scope is per-repo, shared across pull_request_target runs (which use the base repo's cache scope) and pushes to main. A PR running in the base repo's cache scope can poison entries that production workflows on main will later restore.
This is very difficult to understand, and to teach to new people, because everything is configured as YAML, yet behind the scenes everything is laid out as directories and files.
What if your CI pipeline were an old-school bash script instead? It would be far more obvious to a greater number of people how it works, and what is left behind by other runs. We know how directories and files work in bash scripts.
Could we go back to basics and manage pipelines as scripts, and maybe even run a small server?
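To make that concrete, here is a minimal sketch of what such a pipeline could look like. Everything in it is hypothetical (the paths, `REPO_URL`, the npm steps), but the shared state is an ordinary directory, and the one place where a run writes into it is a visible line of code:

```bash
#!/usr/bin/env bash
# Hypothetical minimal pipeline: ci.sh. REPO_URL and the npm steps are
# stand-ins; the point is that the cache is just a directory on disk.
set -euo pipefail

CACHE_ROOT=${CACHE_ROOT:-/var/ci/cache}   # shared state across runs
BUILD_DIR=$(mktemp -d)                    # throwaway workspace per run
trap 'rm -rf "$BUILD_DIR"' EXIT

git clone --depth 1 "$REPO_URL" "$BUILD_DIR/src"
cd "$BUILD_DIR/src"

# Restore dependencies from the shared cache if a previous run left them.
if [ -d "$CACHE_ROOT/node_modules" ]; then
    cp -r "$CACHE_ROOT/node_modules" node_modules
fi

npm install
npm test

# Save dependencies back for the next run. This is exactly where one run
# writes into state that later runs will read, and it is plainly visible.
mkdir -p "$CACHE_ROOT"
rm -rf "$CACHE_ROOT/node_modules"
cp -r node_modules "$CACHE_ROOT/node_modules"
```

Cache "scope" stops being an abstract concept: it is whatever directory the script reads and writes.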
I like a lot about nix, and this is one of those things: built derivations are addressed by the hash of their inputs, so without changing something about the inputs you (barring bugs) cannot get an incorrect or poisoned cache artifact (see the sketch below).
I'm not sure cases like the cache poisoning here would be any more obvious, though.
Unless your bash script setup simply doesn't have pull_request_target-like functionality, but then removing it from the YAML setup works just as well.
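For what it's worth, the input-addressing idea translates to bash terms too. A rough sketch (the lockfile name and paths are hypothetical, and this only loosely approximates what nix gives you):

```bash
#!/usr/bin/env bash
# Sketch of input-addressed caching in plain bash: the cache key is a hash
# of the inputs, so a stale or poisoned entry can only be restored by a run
# with the exact same inputs. CACHE_ROOT and the lockfile are hypothetical.
set -euo pipefail

CACHE_ROOT=${CACHE_ROOT:-/var/ci/cache}

# Key the dependency cache by the hash of the lockfile (the input).
key=$(sha256sum package-lock.json | cut -d' ' -f1)
entry="$CACHE_ROOT/deps-$key"

if [ -d "$entry" ]; then
    cp -r "$entry" node_modules    # hit: inputs unchanged since last build
else
    npm ci                         # miss: build from scratch
    mkdir -p "$CACHE_ROOT"
    cp -r node_modules "$entry"    # store the result under the input hash
fi
```

One caveat that supports the doubt above: unlike the nix store, nothing here stops a malicious run from overwriting an existing entry under the same key, so write access to the cache directory is still the thing you have to control.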
The other advantage with bash is that most developers can run it locally to validate what it is doing and debug issues. With GitHub Actions you always need to commit and push, which slows down the DX.
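For example, using the hypothetical ci.sh from the sketch above:

```bash
# Run the exact same pipeline on a laptop, pointing the cache at a local dir.
export REPO_URL=https://github.com/example/project.git
export CACHE_ROOT="$HOME/.ci-cache"
./ci.sh

# Or trace every command to debug a failing step.
bash -x ./ci.sh
```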