I don't understand why the takeaway here is (unless I'm missing something) more or less "everything is going to get exploited all the time". If LLMs can really find a ton of vulnerabilities in my software, why wouldn't I run them myself and just patch all the vulnerabilities, leading to perfectly secure software (or, at the very least, software in which LLMs can no longer find any new vulnerabilities)?
The pressure to do so will only arrive as a consequence of the predicted vulnerability explosion, not before it. And it will have some cost: you need dedicated, motivated people to conduct the vulnerability search, apply the fixes, and re-check until the search comes up empty, before each new deployment.
The prediction is: Within the next few months, coding agents will drastically alter both the practice and the economics of exploit development. Frontier model improvement won’t be a slow burn, but rather a step function. Substantial amounts of high-impact vulnerability research (maybe even most of it) will happen simply by pointing an agent at a source tree and typing “find me zero days”.
That might be one outcome, especially for large, expertly-staffed vendors who are already on top of this stuff. My real interest is in what happens to the field for vulnerability researchers.
Attackers only have to be successful once while defenders have to be successful all the time?
I've worked at companies that balked at spending $300 on a second-hand ThinkPad because I really wanted to work on a Linux machine rather than a Mac. I don't see them throwing $unlimited at tokens to find vulnerabilities, at least not until after it's too late.
My sense is that the asymmetry is a non-trivial issue here. In particular, a threat actor needs one working path, while defenders need to close all of them. In practice, patching velocity is bounded by release cycles, QA issues and regression risk, and a potentially large number of codebases that need to be looked at.
Find-then-patch only works if you can fix the bugs quicker than you’re creating new ones.
Some orgs will be able to do this, some won’t.
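A minimal sketch of that rate argument, with entirely made-up numbers (the function name and rates are hypothetical, just to illustrate the dynamics):

```python
# Toy backlog model: bugs are introduced at some rate per release and
# remediated at some rate per release. Find-then-patch only converges
# when the fix rate exceeds the introduction rate.

def backlog_after(releases, introduced_per_release, fixed_per_release, start=0):
    """Open-bug count after a number of releases (floored at zero)."""
    backlog = start
    for _ in range(releases):
        backlog = max(0, backlog + introduced_per_release - fixed_per_release)
    return backlog

# Org that fixes faster than it ships new bugs: the backlog drains to zero.
print(backlog_after(10, introduced_per_release=5, fixed_per_release=8, start=20))  # 0

# Org that doesn't: the backlog grows without bound.
print(backlog_after(10, introduced_per_release=8, fixed_per_release=5))  # 30
```

The point is that the steady-state outcome depends only on the sign of the net rate, not on how many bugs the scanner finds in any one pass.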
> If LLMs can really find a ton of vulnerabilities in my software, why would I not run them and just patch all the vulnerabilities, leading to perfectly secure software?
Probably because it will be a felony to do so. Or, the threat of a felony at least.
And this is because it is very embarrassing for companies to have society openly discussing how bad their software security is.
We sacrifice national security for the convenience of companies.
We are not allowed to test the security of systems, because that is the responsibility of companies, since they own the system. Also, companies who own the system and are responsible for its security are not liable when it is found to be insecure and they leak half the nation's personal data, again.
Are you seeing how this works yet? Let's not have anything like verifiable and testable security interrupt the gravy train to the top. Nor can we expect systems to be secure all the time, be reasonable.
One might think that since we're all in this together and all our data is getting leaked twice a month, we could work together and all be on the lookout for security vulnerabilities and report them responsibly.
But no, the systems belong to companies, and they are solely responsible. But also (and very importantly) they are not responsible and especially they are not financially liable.
Takeaway is formal software.
Because not all software gets auto-updated. Most of it does not!
closed source software
deliberate vulnerabilities (thanks nsa)
Any patch you ship lands on a moving treadmill of releases and deps, with new code stapled onto old junk and old assumptions leaking into the next version. Attackers can run the same models you do, so the gap between finding and fixing bugs shrinks until your team are doing janitorial work against a machine.
"Perfectly secure" software is a philosophy seminar, not an outcome. You can cut the bug pool down a lot, but the tide keeps coming and the sandcastle still falls over.
When did we enter the twilight zone where bug trackers are consistently empty? The limiting factor in bug reduction is remediation, not discovery. Even developer smoke testing usually surfaces bugs far faster than they can be fixed, let alone actual QA.
To be fair, the limiting factor in remediation is usually finding a reproducible test case, which a vulnerability by necessity is. But I would still bet most systems have plenty of bugs in their trackers, accompanied by reproducible test cases, that are still bottlenecked on remediation resources.
This is, of course, orthogonal to the fact that patching insecure-by-design systems into security has so far been a colossal failure.