Hacker News

thethimble · yesterday at 4:23 PM · 0 replies · view on HN

This will absolutely help, but so long as prompt injection remains an unsolved problem, an LLM can never conclusively determine whether a given skill is truly safe.