I don’t think there’s any solution here to what SimonW calls the lethal trifecta, so I’d say that’s still pretty impossible.
I saw on The Verge that they partnered with the company that repeatedly disclosed security vulnerabilities, to try to make skills more secure, which is interesting: https://openclaw.ai/blog/virustotal-partnership
I’m guessing most of that malware was really obvious, people just weren’t looking, so it’s probably found a lot. But I also suspect it’s essentially impossible to actually reliably find malware in LLM skills by using an LLM.
Honestly, 'malware' is just the beginning. Combining prompt injection with access to sensitive systems and write access to 'the internet' is the part that scares me about this.
I never want to be one wayward email away from an AI tool dumping my company's entire Slack history into a public GitHub issue.
Regarding prompt injection: it's possible to reduce the risk dramatically by:

1. Using frontier models like opus4.6 or gpt5.2, which have better safety training. These models are paranoid.
2. Restricting downstream tool usage and permissions for each agentic use case (programmatically, not as LLM instructions).
3. Avoiding untrusted content in the "user" or "system" channels; only use the "tool" channel. Adding tags like "Warning: Untrusted content" can help a bit, but remember command injection techniques ;-)
4. Hardening the system according to state-of-the-art security practice.
5. Testing with a red-teaming mindset.
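Points 2 and 3 can be sketched in a few lines of Python. This is a minimal illustration under assumed names: the tool names, the `ALLOWED_ACTIONS` table, and the message-dict format are hypothetical stand-ins, not any real agent framework's API.

```python
# Point 2 (sketch): a per-use-case allowlist enforced in code,
# not via instructions in the prompt. All names here are illustrative.
ALLOWED_ACTIONS = {
    "read_email": {"read"},
    "search_docs": {"read"},
    # deliberately: no tool here gets write access to the public internet
}

def run_tool(name, action, handler, *args):
    """Refuse any tool/action pair outside the allowlist,
    regardless of what the model requested."""
    if action not in ALLOWED_ACTIONS.get(name, set()):
        raise PermissionError(f"tool {name!r} may not perform {action!r}")
    return handler(*args)

# Point 3 (sketch): untrusted content only ever enters via the "tool"
# channel, prefixed with a warning tag. The tag helps a bit but is not
# a security boundary on its own.
def tool_message(untrusted_text):
    return {
        "role": "tool",
        "content": "Warning: untrusted content follows.\n" + untrusted_text,
    }
```

The key design choice is that `run_tool` checks the allowlist in ordinary code, so even a fully injected model can only request actions the harness was going to permit anyway.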