Since the suggestion is that the new security-bug-finding LLMs will increase protection because they will have access to the full source code, the dark forest fear would be: if an attacker can get all the source, the attacker will be in a better position.
This seems wrong, however, as it ignores the arrow of time. The full source code has been scanned and fixed for anything LLMs can find before it hits production; anyone exfiltrating your codebase can only use their models to find holes in what is exposed in production, and only holes that your models, for some reason, did not find.
I don't think there is any reason to suppose non-nation-state actors will have better models available to them, so it is not a dark forest. Nation states will probably limit their attacks to specific targets, so most companies that secure their codebase using LLMs built for this will probably be in a significantly more secure position than today. I would think the golden age of criminal hacking is drawing to a close. This assumes companies smart enough to do this, however.
Furthermore, the worry about nation-state attackers still assumes that they will have better models, and it is not clear that is likely either.
Does this have anything to do with the other 'dark forest'? https://en.wikipedia.org/wiki/Dark_forest_hypothesis
I don't see the connection.
> I would think the golden age of criminal hacking is drawing to a close. This assumes companies smart enough to do this however.
It's rarely the systems that are the weak link; rather, it's the humans with backdoor access.
Any single company might be able to proactively defend themselves from attackers, but will companies invest the tokens in this? Most people simply don't care until it's too late.
And in a world where companies begin to suffer attacks as a result, can the ones willing to invest in security defend themselves, not just against cyberattackers, but against a broader investor and customer backlash that sees startups building their own technology stacks as riskier because of perceptions about cybersecurity?
An angel investor or LP who sees news articles and media coverage about cyberattacks, and then has a portfolio company get hacked in a material way, may simply decide the space has become too risky for further investments, no matter how much better a security footing prospective investments are on.
The dark forest hypothesis, at its core, is about the decision of whether to put your neck out in the universe; if the weapons and countermeasures in use are too horrifying to fathom, and the risks unquantifiable, one chooses not to extend one's neck. And that is how an industry begins to dry up.