> Are you assuming every piece of software has a dedicated defender team? Strikes me as unlikely.
No, I'm assuming it has maintainers (they play the role of defenders).
> engineers who work on software are simply not that great and dedicated about finding vulnerabilities in it.
Yes, but LLMs help them more than they help the attackers, because the attackers are already security experts. In other words, the LLMs reduce the skill gap rather than increase it. Becoming good at using AI is much easier than becoming good at security.
> I'm assuming it has maintainers (they play the role of defenders).
A maintainer has a full-time job: developing the software. A maintainer who is also a defender has two full-time jobs, and in such a case one of them will inevitably be done poorly; we all know which one that is.
On the other side there’s an attacker with a singular job and a strong incentive to do it well.
> LLMs help them more than they help the attackers, because the attackers are already security experts.
The supposed logic is that an LLM multiplies your skill. If the multiplier is 5 and your attacking skill is 1 before the multiplication, you end up with 5; if your attacking skill is already at 10, you end up with 50. You could argue that LLMs are not good enough to act as multipliers, and then my math won't work.
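The multiplier arithmetic above can be sketched in a few lines (the numbers are purely illustrative, not measurements of anything):

```python
# Illustrative model only: a uniform LLM "multiplier" m applied to both sides.
def skill_gap(defender_skill: float, attacker_skill: float, multiplier: float) -> float:
    """Absolute skill gap between attacker and defender after both use the LLM."""
    return attacker_skill * multiplier - defender_skill * multiplier

# Before LLMs (multiplier = 1): gap is 10 - 1 = 9.
print(skill_gap(1, 10, 1))  # 9
# With a 5x multiplier: gap is 50 - 5 = 45.
print(skill_gap(1, 10, 5))  # 45
```

Note that under this model the *ratio* of skills stays fixed while the *absolute* gap widens, which is exactly where the two sides of the argument diverge: whether "reducing the skill gap" means the ratio or the absolute difference.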