> exploiting software is someone’s full-time job, whereas the engineers already have one—building it.
But the attackers need to spread their attacks over many products, while the engineers only need to defend one.
> The newer factor is attackers working for nation-states: protected by them, and potentially with figurative guns to their heads, or at least with livelihoods that depend on how much damage they can deal. The lack of equivalent pressure on the developer’s side leads me to adjust it to A = D × 10.
Except that's true even without LLMs. LLMs improve both sides' capabilities by the same factor (at least hypothetically).
> Additionally, let’s multiply that by a variable DS/AS that reflects developer’s/attacker’s skill at using LLMs in such particular ways that find the most serious vulnerabilities. As a random guess, let’s say AS = DS × 5, as the attacker would have been exclusively using LLMs for this purpose.
I'm not sure that's right: once attackers develop some skill, that skill can spread to all defenders through tools with the skill built in. So again, we can cancel the "LLM factor" from both sides of the equation. If anything, security skills can spread to defenders more easily with LLMs, because without LLMs, the attackers' security skills require more effort to develop.
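Taken at face value, the quoted model is just two multipliers stacked on top of each other. A minimal sketch of the arithmetic (variable names D, A, DS, AS come from the thread; the baseline values of 1.0 are purely illustrative, not measurements):

```python
# Back-of-the-envelope model from the thread above; not a real metric.
D = 1.0           # baseline developer (defender) capability
A = D * 10        # attacker capability: nation-state pressure factor
DS = 1.0          # developer's skill at using LLMs to find vulnerabilities
AS = DS * 5       # attacker's LLM skill: full-time specialization factor

# Combined attacker advantage under the quoted assumptions.
attacker_edge = (A * AS) / (D * DS)
print(attacker_edge)  # 50.0
```

The reply's counterargument amounts to setting AS ≈ DS (skills diffuse into shared tooling), which collapses the second multiplier and leaves only the pre-existing A/D imbalance.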
> > exploiting software is someone’s full-time job, whereas the engineers already have one—building it.
> But the attackers need to spread their attacks over many products, while the engineers only need to defend one.
Are you assuming every piece of software has a dedicated defender team? Strikes me as unlikely.
Realistically, you have people whose job or passion is to develop software, who often work not on one but on N projects at the same time (especially in OSS), and who definitely aren’t going to make finding vulnerabilities their full-time job; if they did, there’d be no one left to build the thing in the first place.
> Except that's true even without LLMs.
Of course. That’s why I put it before I started taking into account LLMs. LLMs multiply the pre-existing imbalance.
> once attackers develop some skill, that skill could spread to all defenders through tools with the skill built into them
Sure, that’s an interesting point. But I’m sure the attackers try to conceal their methods; the way we tend to find out about them is when an exploit is exhausted, stops being worth $xxxxxxxx, and starts being sold on mass markets, at which point it’s arguably a bit late. Furthermore, you still mention those mystical “defenders”, as if you’d expect an average software project to have any dedicated defenders.
(Edited my reply to the latest point, I didn’t read it correctly the first time.)