Hacker News

blargey · yesterday at 7:53 PM · 1 reply

I remember when people were discussing the “performance-improving” hack of formulating their prompts as panicked pleas to save their job, household, and puppy from imminent doom…by coding X. I wonder whether the backfiring is a more recent phenomenon in models that are better at “following the prompt” (including the logical conclusion of its emotional charge), or whether it was just poor measurement of “performance” all along.


Replies

Loquebantur · yesterday at 8:17 PM

The central point here is the presence of functional circuits in LLMs that shape observable behavior in much the same way emotions do in humans.

When you can't differentiate between two things, how are they not equal? People here want "things" that act exactly like human slaves but "somehow" aren't human.

Hiding behind one's ignorance about the true nature of the internal state of what could arguably represent sentience is sheer hubris. Conversely, calling LLMs "stochastic parrots" without being able to say how humans are any different is just a deflection of that hubris. Greed is no justification for slavery.