I tend to be skeptical, but I listened to the linked podcast with Carlini and found him very credible: not a sales guy, not an AI doomer, but someone talking about how little work he had to do to find real exploits in heavily-fuzzed code. I still think it's a safe bet that many apps will be cumbersome to attack, but it's going to happen faster than I used to think.
https://securitycryptographywhatever.com/2026/03/25/ai-bug-f...
Thanks. I watched most of this talk and, unless I missed something, it seems to confirm what I was thinking: most of the current strength comes from the scale at which you can deploy LLMs, not from them being better at vulnerability research than humans (once you factor out the throughput). And since this is a relatively new development, nobody really knows yet whether this will have a greater impact than fuzzers and static analyzers did, or whether newer models will ever reach a level that makes computer security a solved problem.
Nicholas Carlini is the real deal. He was most recently on the front page for "How to win a best paper award", about his experience winning a series of awards at Big 4 academic security conferences, including work he coauthored with Adi Shamir (I'm just namedropping the obvious name) on stealing the weights from deep neural networks. Before all that (and before he got his doctorate), he and Hans Nielsen wrote the back half of Microcorruption.
He's not a sales guy.