logoalt Hacker News

philipallstaryesterday at 6:48 PM3 repliesview on HN

> The issue title was interpolated directly into Claude's prompt via ${{ github.event.issue.title }} without sanitisation.

It's astonishing that AI companies don't know about SQL injection attacks and how a prompt requires the same safeguards.


Replies

WickyNilliamsyesterday at 9:13 PM

No such mitigation exists for LLMs because they do not and (as far as anybody knows) cannot distinguish input from data. It's all one big blob

arjvikyesterday at 7:11 PM

There’s a known fix for SQL injection and no such known fix for prompt injection

rawlingyesterday at 7:11 PM

But you can't, can you? Everything just goes into the context...

show 1 reply