> The issue title was interpolated directly into Claude's prompt via ${{ github.event.issue.title }} without sanitisation.
It's astonishing that AI companies don't know about SQL injection attacks and how a prompt requires the same safeguards.
There’s a known fix for SQL injection and no such known fix for prompt injection
But you can't, can you? Everything just goes into the context...
No such mitigation exists for LLMs because they do not and (as far as anybody knows) cannot distinguish input from data. It's all one big blob