OK, consider a for loop that goes through your repo, then through each file, and then through each common vulnerability...
Is Mythos somehow more powerful than just a recursive for loop, aka "agentic" review? You can run `opencode run --command` with a tailored command for whatever vulnerabilities you're looking for.
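Concretely, the naive loop is something like the sketch below; the vulnerability list and the `ask_model` stub are placeholders for whatever model call you'd actually wire in:

```python
# Minimal sketch of the nested "for loop" review: every file x every vuln class.
from pathlib import Path

VULN_CLASSES = [
    "SQL injection",
    "path traversal",
    "command injection",
    "hardcoded secrets",
]

def ask_model(prompt: str) -> str:
    """Placeholder: invoke your model/CLI of choice here and return its reply."""
    raise NotImplementedError

def naive_review(repo_root: str) -> dict[str, list[str]]:
    findings: dict[str, list[str]] = {}
    for path in Path(repo_root).rglob("*.py"):    # outer loop: every file
        source = path.read_text(errors="ignore")
        for vuln in VULN_CLASSES:                 # inner loop: every vuln class
            prompt = (
                f"Review this file for {vuln}. Reply 'none' if you find nothing.\n\n"
                f"{source}"
            )
            reply = ask_model(prompt)
            if reply.strip().lower() != "none":
                findings.setdefault(str(path), []).append(f"{vuln}: {reply}")
    return findings
```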
Newer models have larger context windows, and more stable reasoning across larger context windows.
If you point your model directly at the thing you want it to assess, so it doesn't have to gather any additional context, you're not really testing those things at all.
Say you point Kimi and Opus at some code and give them an agentic looping harness with code review tools. They're going to start digging into the code, gathering context by mapping out references and following leads.
If the bug is really shallow, the model is going to get everything it needs to find it right away, and neither of them will have any advantage.
If the bug is deeper and requires a lot more code context, Opus is going to be able to hold onto a lot more information, and it's going to be a lot better at reasoning across all of it. That's a test that would actually compare the models directly.
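To make the contrast concrete, the agentic version looks roughly like the sketch below. The tool set and the `call_model` stub are assumptions for illustration, not any particular harness's real API; the point is that every lead the model follows lands back in its own context.

```python
# Rough sketch of an agentic review loop: the model gathers its own context via tools.
import subprocess
from pathlib import Path

def read_file(path: str) -> str:
    return Path(path).read_text(errors="ignore")

def grep_refs(symbol: str, repo: str = ".") -> str:
    # Follow a lead: find references to a symbol the model asked about.
    out = subprocess.run(["grep", "-rn", symbol, repo],
                         capture_output=True, text=True)
    return out.stdout[:4000]

TOOLS = {"read_file": read_file, "grep_refs": grep_refs}

def call_model(messages: list[dict]) -> dict:
    """Placeholder: send the transcript to your model and get back either
    {"tool": name, "args": {...}} or {"answer": text}."""
    raise NotImplementedError

def agentic_review(task: str, max_steps: int = 20) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if "answer" in reply:                 # model has enough context, done digging
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
        # Deep bugs mean many tool calls, so the transcript grows -- this is
        # exactly where context size and attention quality start to matter.
    return "gave up"
```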
Mythos is just a bigger model with a larger context window and, presumably, better prioritization and stronger attention mechanisms.