Just to be contrarian, perhaps some measure of risk is reduced by the scale of one.
Identifying a vulnerability that can be exploited against many thousands or millions of targets is perhaps more attractive than one that works against a single target of individually low value.
This of course would assume that vulnerabilities are in fact unique (which is admittedly questionable).
I had the exact same thought. Pretty low probability that there's going to be a script-kiddie exploit for your custom tools. Pretty decent probability that there will be vulnerabilities present if someone cares enough to target you.
> This of course would assume that vulnerabilities are in fact unique (which is admittedly questionable).
Yeah, I don't think all that generated software will be as unique as people expect.
Considering it will all be generated with the same LLMs, which share roughly the same training data, we will see similar patterns of vulnerabilities, and those will be just as easy to exploit.
We should expect the same automated personalization to be used offensively, and for that personalization to be packaged into tools anyone can run (likely with a natural-language interface).
(Appreciate your counterpoint for its own sake. It’s an interesting idea.)
If a vulnerability is found in the common, non-individualized ancestor software, how quickly will people patch their individual versions of it?
To take this further, don't LLMs also lower the "barrier to attention"? If it only takes Claude's eyeballs, and not a hacker's, on the software, won't people find vulnerabilities in custom software-for-one too?
Besides that, one could easily imagine that software created for similar purposes ("make me a file editor") by the same tool, or the same small handful of tools (Claude and a very short "etc." for completeness), might share similar vulnerabilities, so this kind of broad net might be even cheaper to cast than one would imagine at first.