The more interesting question I have is whether such prompt injection attacks can ever actually be avoided, given how GenAI works.
They could be if models were trained properly, with more carefully delineated prompts.
Perhaps not, and it is indeed not unwise of Apple to stay away for a while, given their ultra-focus on security.
Removing the risk for most jobs should be possible. Just build the same cages other apps already have. Also add a bit more transparency, so people know better what the machine is doing, maybe even with a mandatory user acknowledgement for potentially problematic actions, similar to the root-access dialogs we have now. I mean, you don't really need access to all your data when you're just setting a clock or playing music.
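For what it's worth, that "cage" idea maps pretty directly onto a plain capability/permission model: every tool the assistant can call declares the scopes it needs, and anything touching user data triggers an explicit confirmation dialog before it runs. A minimal sketch of the idea (Python; all names like `Tool`, `dispatch`, and `SENSITIVE_SCOPES` are hypothetical and not any real assistant API):

```python
# Hypothetical sketch of per-tool permission scopes plus a root-access-style
# confirmation for anything that touches user data. Illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    scopes: set[str]                # e.g. {"clock"} or {"contacts", "network"}
    run: Callable[..., object]

# Assumption: these are the scopes we treat as "user data" and gate behind a dialog.
SENSITIVE_SCOPES = {"contacts", "files", "network"}

def dispatch(tool: Tool, *args, confirm=input):
    # Harmless tools (set a clock, play music) run directly, no dialog needed.
    needed = tool.scopes & SENSITIVE_SCOPES
    if needed:
        # Mandatory user acknowledgement, like a root-access dialog.
        answer = confirm(f"'{tool.name}' wants access to {sorted(needed)}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            raise PermissionError(f"user denied {tool.name}")
    return tool.run(*args)

set_alarm = Tool("set_alarm", {"clock"}, lambda t: f"alarm set for {t}")
send_mail = Tool("send_mail", {"contacts", "network"}, lambda to, body: f"sent to {to}")

print(dispatch(set_alarm, "07:00"))     # runs without any prompt
# dispatch(send_mail, "a@b.c", "hi")    # would ask the user before touching contacts/network
```

A prompt-injected model could still try to call the sensitive tool, but the worst it gets without the user's explicit "yes" is a denied request, which is the whole point of the cage.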