Hacker News

spr-alex · yesterday at 8:30 PM

I interned for the author at 18. I assumed security testing worked like this:

1. Static analysis catches nearly all bugs with near-total code coverage

2. Private tooling extends that coverage further with better static analysis and dynamic analysis, and that edge is what makes contractors valuable

3. Humans focus on design flaws and weird hardware bugs like cryptographic side-channels from electromagnetic emanations

Turns out finding all the bugs is really hard. Codebases and compiler output have exploded in complexity over 20 years, which has not helped the static analysis vision. Today's mitigations are fantastic compared to back then, but just this month a second 0day chain got patched on one of the best platforms for hardware mitigations.

I think LLMs get us meaningfully closer to what I thought this work already was when I was 18 and didn't know anything.


Replies

cartoonworld · yesterday at 8:46 PM

Lots of security issues form at the boundaries between packages, zones, services, sessions, etc. From my perspective, static analysis could catch this stuff but doesn't seem to. Bugs are often chains, and building a chain requires a lot of creativity, planning, etc.
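As a minimal sketch of the kind of boundary bug described above (the service layout, header name, and function names are all hypothetical):

```python
# Hypothetical two-component service illustrating a cross-boundary bug:
# each piece is "correct" in isolation, but the composition is not.
# The gateway strips X-Internal-Admin from outside traffic; the backend
# trusts it. If any request path bypasses the gateway, the chain becomes
# an auth bypass -- per-component static analysis sees nothing wrong.

def gateway(request_headers: dict) -> dict:
    # Boundary 1: sanitize untrusted input before forwarding.
    cleaned = dict(request_headers)
    cleaned.pop("X-Internal-Admin", None)
    return cleaned

def backend(request_headers: dict) -> str:
    # Boundary 2: assumes the gateway already ran.
    if request_headers.get("X-Internal-Admin") == "1":
        return "admin"
    return "user"

# Normal path: header stripped, no privilege.
print(backend(gateway({"X-Internal-Admin": "1"})))  # user
# Bypass path (e.g. an internal port exposed): header trusted.
print(backend({"X-Internal-Admin": "1"}))           # admin
```

Neither function contains a bug on its own; the vulnerability only exists in the assumption each side makes about the other, which is exactly what whole-system reasoning (human or otherwise) has to catch.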

Consider logic errors and race conditions. It's surely not impossible for an LLM to find these, but it seems likely that you'll need to step through the program's control flow in order to reveal a lot of these interactions.

I feel like people treat LLMs as free since there isn't as much hands-on-keyboard time. I kind of disagree, and as the payouts for these vulns fall, I feel like nobody is gonna want to eat the token spend. Plenty of hackers already use AI in their workflows, and even then it is a LOT OF WORK.

Legend2440 · yesterday at 8:57 PM

Catching all bugs with static analysis would involve solving the halting problem, so it's never going to happen.
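The reduction behind that claim can be sketched directly. Assume a hypothetical perfect analyzer `has_bug(source)` that always answers correctly; then for any program P you can build a wrapper whose only bug fires after P finishes, so asking the analyzer about the wrapper decides whether P halts, which Turing proved undecidable:

```python
import textwrap

# Sketch of the halting-problem reduction. `has_bug` is hypothetical:
# a perfect static analyzer that always says whether a program can
# reach a bug. Given arbitrary program source P, wrap it so the only
# "bug" (an assert False) is reached exactly when P halts. A correct
# has_bug(wrapper) answer would therefore decide halting for P.

def make_wrapper(program_source: str) -> str:
    body = textwrap.indent(textwrap.dedent(program_source), "    ")
    return f"def main():\n{body}\n    assert False  # reachable iff P halts\n"

wrapper = make_wrapper("x = 1\nwhile x > 0:\n    x -= 1\n")
print(wrapper)
# has_bug(wrapper) == True  would prove P halts;
# has_bug(wrapper) == False would prove P loops forever.
```

Real analyzers escape this by being incomplete (missing bugs) or unsound (reporting false positives), which is why "catch everything with no noise" was never on the table.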
