I can see the dichotomy forming in the "post-AI" world:
1) massive companies spending millions of tokens to write+secure their software
2) in the shadows, "elite" software contractors writing bespoke software to fulfill needs for those who can't afford the millions, or fix cracks in (1)
(Oh wait, I think this is what is happening now, anyway, minus the millions of tokens)
Does this mean all code written before Mythos is a liability?
people buying into what companies say about their own products has always been the frustration in cyber. now more than ever.
nothing is better or worse; it's basically as it's always been.
if you think otherwise, stop ignoring the past.
I don't think open source will get stronger. Those who have enough GPU power won't depend on multiple human eyes anymore. AI will be enough.
I already see this happening: companies are moving toward AI-generated code (or forking projects into closed source), keeping their code private, with AI-written pipelines handling supply-chain security, and auditing and developing it primarily with AI.
At that point, for some companies, there's no real need for a community of "experts" anymore.
Am I the only one who thinks this is exactly like it was before AI, when we used small-batch, hand-crafted tokens made by organic engineers to find vulnerabilities?
These mass-produced tokens are just cheaper...
> to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.
If we take this at face value, it's not that different from how many executive teams believe cybersecurity has worked up to today: "If we spend more on our engineering and infosec teams, we are less likely to get compromised."
The only big difference I can see is timescale. If LLMs can find vulnerabilities and exploit them this easily (and I do take that with a grain of salt, because benchmarks are benchmarks), then you may lose your ass in minutes instead of after one dedicated cyber-explorer's Monster-Energy-fueled, 7-week traversal of your infrastructure.
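The quoted cost asymmetry can be reduced to a toy break-even model (all figures below are hypothetical, just to make the economics concrete):

```python
# Toy defender-vs-attacker token economics (all numbers hypothetical).
# The quoted claim: hardening wins only if defenders spend more tokens
# discovering exploits than attackers will spend exploiting them.

def break_even_defense_tokens(attacker_tokens: float,
                              defender_efficiency: float = 1.0) -> float:
    """Tokens a defender must spend to match an attacker's effort,
    scaled by how token-efficient defensive discovery is relative
    to offensive discovery (1.0 = equally efficient)."""
    return attacker_tokens / defender_efficiency

# If attackers would spend ~1M tokens finding an exploit, and defense
# is half as token-efficient at discovery, defenders need ~2M tokens
# just to break even.
print(break_even_defense_tokens(1_000_000, defender_efficiency=0.5))
```

The timescale point drops out of the same model: shrinking the attacker's wall-clock cost per token shrinks the window defenders have to reach break-even, without changing the ratio itself.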
I am still far more concerned about social engineering than LLMs finding and exploiting secret back doors in most software.
Please. Are we now going to rely on Anthropic et al. to secure our systems? Wasn't it enough to rely on them to build our systems? What's next? Relying on them for monitoring and observability? What else? Design and mockups?
Everything eventually turns into Bitcoin. That's what I expect to see in the coming years and decades.
In other news, token seller says tokens should be bought
I remain skeptical. Security is not a dial you can turn; you can't shove in more money or more tokens and make the thing more secure.
Not saying security will never be dominated by AI, as happened with chess, with maps, with Go, with language. But a braindead money-to-security pipeline? Skeptical.
Dijkstra would shake his head at our folly.
Everything eventually turns into Bitcoin. Satoshi just saw it first.
We did a lot of thinking around this topic and distilled it into a new way to dynamically evaluate the security posture of an AI system (which can apply to any system, for that matter). We wrote some thoughts on this here: https://fabraix.com/blog/adversarial-cost-to-exploit
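One naive way to operationalize a "cost to exploit" score is to price each known attack path and take the minimum; this is purely illustrative, with made-up paths and numbers, and the linked post may define its metric very differently:

```python
# Purely illustrative sketch: a system is only as expensive to break
# as its cheapest attack path. Paths and dollar figures are invented.

attack_paths = {
    "phish an admin": 500,
    "exploit unpatched CVE": 5_000,
    "novel 0-day in custom code": 250_000,
}

def cost_to_exploit(paths: dict) -> tuple:
    """Return the cheapest attack path and its estimated attacker cost."""
    cheapest = min(paths, key=paths.get)
    return cheapest, paths[cheapest]

path, cost = cost_to_exploit(attack_paths)
print(path, cost)  # the weakest link dominates the score
```

The useful property of a min-over-paths score is that it makes the social-engineering point above quantitative: hardening the code paths doesn't move the number at all while the cheap human path remains.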