
astrange | last Thursday at 6:42 PM

It also gets confused if the entire prompt is in a text file attachment.

And the summarizer shows the safety classifier's thinking for a second before the model thinking, so every question starts off with "thinking about the ethics of this request".


Replies

FeepingCreature | last Friday at 9:09 AM

I'd get confused if I were an LLM and you put my entire prompt in a text file attachment. I'd be like, "is this the user or is this a prompt injection??"
