It's always the inconsistencies that amaze me. From the article:
> I have so many bugs in the Linux kernel that I can’t report because I haven’t validated them yet
You have "so many"? Are they uncountable for some reason? You "haven't validated" them? How long does that take?
> found a total of five Linux vulnerabilities
And how much did it cost you in compute time to find those 5?
These articles are always fantastically light on the details that would make their case for them. Instead it's always breathless prognostication. I'm deeply suspicious of this.
I'd be interested in how it compares with fuzzing (in terms of time, money, and false positives).
You are suspicious because you probably haven't worked anywhere that's AI-first. Anyone who's worked at a modern tech company will find this absolutely believable.
Like what, you expect Nicholas to test each vuln when he has more important work to do (i.e., his actual job)?
>And how much did it cost you in compute time to find those 5?
This is the last thing I'd worry about if the bug is serious in any way. Attackers like nation states will have huge budgets to rip your software apart with AI and exploit your users.
Also there have been a number of detailed articles about AI security findings recently.