logoalt Hacker News

_pdp_yesterday at 11:14 PM1 replyview on HN

IMHO, N8n isn't great if you care about security.

It's not that the tool itself is inherently insecure - it's more about how users are encouraged to use it.

Nearly all workflows built using N8n that I've seen face some kind of prompt injection vulnerability. This is primarily because, in most cases, you configure the LLM by directly inserting external data into the system prompt. As many of you know, the system prompt has the highest execution priority, meaning instructions placed there can heavily influence how the LLM interacts with its tools.

While this isn't exploitable in every situation, it can often be exploited rather generically: by embedding prompts in your social media bio, website, or other locations from where these workflows pull data. Recently, I've managed to use this technique to prompt a random LinkedIn bot to email me back a list of their functions. That's not overly exciting in itself, but it clearly demonstrates the potential for malicious use.

This issue is not specific to N8n. Other tools do it too. But it seems to me there is little to no awareness that this is in fact a problem.

There is a better, safer way to incorporate external data into LLM prompts without jumping through hoops, but unfortunately, that's not how things are currently done with N8n, at least as of today.


Replies

moralestapiayesterday at 11:32 PM

What's the safe alternative?

show 2 replies