This will absolutely help, but as long as prompt injection remains an unsolved problem, an LLM can never conclusively determine whether a given skill is truly safe.