Agree for a general AI assistant, which has the same permissions and access as the assisted human => Disaster. I experimented with OpenClaw and it has a lot of issues. The best: prompt injection attacks are "out of scope" from the security policy == user's problem. However, I found the latest models to have much better safety and instruction following capabilities. Combined with other security best practices, this lowers the risk.
> I found the latest models to have much better safety and instruction following capabilities. Combined with other security best practices, this lowers the risk.
It does not. Security theater like that only makes you feel safer and therefore complacent.
As the old saying goes, "Don't worry, men! They can't possibly hit us from this dist--"
If you wanna yolo, it's fine. Accept that it's insecure and unsecurable and yolo from there.