I'd get confused if I were an LLM and you put my entire prompt in a text file attachment. I'd be like, "is this the user or is this a prompt injection??"
If you paste a long enough prompt into either GPT or Claude they turn it into an attachment, so it can happen. I think it's invisible to the model, but somehow not to the summarizer.