Thanks for the response!
> it'd be nice to see that effort routed into creating new data-efficient RL algorithms or something that pick up all the slack that distillation is currently carrying
It seems to me like they're already doing that. Some of the most fun I've had actually is reading their papers on the different R.L. environments, especially Egentic ones they set up and the various new algorithms they use to do RL and training in general. Combine that with how much they are innovating with attention mechanisms and I feel like distillation doesn't seem to be really replacing research into these means as just supplementing it — and maybe even making it possible in the first place, because otherwise it would be simply too expensive to get a reasonably intelligent model to experiment with!
> But now that people are actually using LLMs as agents to _do things_, and interact with the open web, and interact with their personal data and sensitive information, the safety and security concerns make a lot more sense to me.
Ah, I see what you mean. Can you point me to any benchmarks or research on how good various models are out of waiting social engineering and prompt injection attacks? That would be extremely interesting to me. Fundamentally, though, I don't think that's really a soluble problem either, and the right approach is to surround an agent with a sufficiently good harness to prevent that. Perhaps with an approach like this:
https://simonwillison.net/2023/Apr/25/dual-llm-pattern/
Or this, which builds on it with more verifiable machinery, if you're less bitter-lesson pilled (like me):
https://simonwillison.net/2025/Apr/11/camel/
> That is, it's less about what's right and wrong by conventional wisdom, and more about what consequences are downstream of various incentives.
Ahhh, I see. Yeah, that could be negative. That's worth thinking about.