I don't subscribe to this view, but it's what some people might think:
LLMs aren't like any software we've made before (if we can even call them software). They act like humans: they can arrive at logical conclusions, they can make plans, they have "knowledge", and they say they have emotions. Who are we to say that they don't? They might not have human-level feelings, but dog-level feelings? Maybe.