Many of the suggestions in this thread (min-release, ignore scripts) are defenses for consumers.
I've been working on Proof of Resilience, a set of 4 metrics for OSS, and using that as a scoring oracle for what to fund.
Popularity metrics like downloads, stars, etc. are easy to fake today with AI agents. An interesting property of the metrics below is that gaming them produces better code, not worse.
These are the 4 metrics:
1. Build determinism - does the published artifact match a reproducible build from source?
2. Fuzzing survival - does the package survive fuzz testing?
3. Downstream stability - does pushing a release break any repos that depend on this project?
4. Patch velocity - how fast are fixes merged?
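As a rough sketch, the four metrics could be combined into a single funding score. Everything here is my own illustration, not from the post: the metric scales, the weights, and all the names are assumptions.

```python
# Hypothetical sketch of a Proof of Resilience scoring oracle.
# Metric scales and weights are illustrative assumptions, not the post's.
from dataclasses import dataclass

@dataclass
class ResilienceMetrics:
    build_determinism: float     # 1.0 if artifact matches a reproducible build, else 0.0
    fuzzing_survival: float      # fraction of fuzz runs survived, 0.0-1.0
    downstream_stability: float  # fraction of dependent repos unbroken by the release
    patch_velocity: float        # normalized 0.0-1.0, 1.0 = fixes merged fast

def resilience_score(m: ResilienceMetrics) -> float:
    """Weighted average of the four metrics (weights are a guess)."""
    return (0.30 * m.build_determinism
            + 0.30 * m.fuzzing_survival
            + 0.25 * m.downstream_stability
            + 0.15 * m.patch_velocity)

score = resilience_score(ResilienceMetrics(1.0, 0.9, 0.95, 0.5))
print(round(score, 4))  # 0.8825
```

A weighted average is just one design choice; a real oracle might instead gate funding on hard floors (e.g. require build determinism outright) before ranking by the remaining metrics.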
Here's a link to the post. It's still early, but I'd appreciate any feedback.
Carl, with all due respect, did you use AI to write this HackMD post?
"it's not just a waste of money — it's a security problem"
I am really passionate about these things, but I am not going to read something you haven't written. Even sharing a prompt, rough sketches, or your raw writing might be beneficial, but I recommend writing it by hand, man. We are all burnt out reading AI slop; I can't read any more of it.