That's not what is happening right now. The bugs are often filtered later by LLMs themselves: if the second pipeline can't reproduce the crash / violation / exploit in any way, the false positives are usually evicted before ever reaching human scrutiny. Checking whether a real vulnerability can be triggered is a trivial task compared to finding one, so this second pipeline has an almost 100% success rate from that point of view: if something passes the second pipeline, it is almost certainly a real bug, and very few real bugs will fail to pass it. It does not matter how much LLMs advance, people ideologically against them will always deny that they have an enormous amount of usefulness. This is expected in the normal population, but to see a lot of people on Hacker News who can't see with their own eyes feels weird.
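The filtering step being described can be sketched roughly like this (a minimal sketch, assuming each candidate report ships with a runnable reproducer command; all names here are hypothetical, not from any real triage pipeline):

```python
import subprocess

def reproduces_crash(command, timeout=30):
    """Run a candidate reproducer command; treat a signal-terminated
    process (negative returncode on POSIX) as a confirmed crash."""
    try:
        result = subprocess.run(command, capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False  # a hang is not a confirmed, reproducible crash
    return result.returncode < 0

def filter_candidates(candidates):
    """Evict reports whose reproducer does not actually trigger a crash,
    so only reproduced findings ever reach a human reviewer."""
    return [c for c in candidates if reproduces_crash(c["repro_cmd"])]
```

The point of the sketch is the asymmetry: generating candidates is the hard, noisy part, while this check is cheap and mechanical, which is why it can discard hallucinated findings before a human sees them.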
I’ve been around long enough to remember people saying that VMs are a useless waste of resources with dubious claims about isolation, that the cloud is just someone else’s computer, that containers are pointless, and now it’s AI. There is an astonishing amount of conservatism in the hacker scene.
Can we study this second pipeline? Is it open so we can understand how it works? Did not find any hints about it in the article, unfortunately.
> This is expected in the normal population
A lot of people, regardless of technical ability, have strong opinions about what LLMs are or are not. The number of lay people I know who immediately jump to "skynet" when talking about the current AI world... The number of people I know who have quit thinking because "Well, let's just see what the AI says"...
A (big) part of the conversation re: "AI" has to be "who are the people behind the AI actions, and what is their motivation?" Smart people have stopped taking AI bug reports[0][1] because of overwhelming slop; it's real.
[0] https://www.theregister.com/2025/05/07/curl_ai_bug_reports/
[1] https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81fd1d...
> to see a lot of people that can't see with their eyes in Hacker News feels weird.
Turns out the average commenter here is not, in fact, a "hacker".
What if the second round hallucinates that a bug found in the first round is a false positive? Would we ever know?
> It does not matter how much LLMs advance, people ideologically against them will always deny they have an enormous amount of usefulness.
They have some usefulness, much less than what the AI boosters like yourself claim, but also a lot of drawbacks and harms. Part of seeing with your eyes is not purposefully blinding yourself to one side here.
They are useful to those who enjoy wasting time.
> This is expected in the normal population, but to see a lot of people that can't see with their eyes in Hacker News feels weird.
You are replying to an account created less than 60 days ago.
> Checking if a real vulnerability can be triggered is a trivial task compared to finding one
Have you ever tried to write a PoC for any CVE?
This statement is wrong. Sometimes a bug may exist but be impossible to trigger or exploit, so it is not trivial at all.