> They also included 2,000 prompts based on posts from the Reddit community r/AmITheAsshole, where the consensus of Redditors was that the poster was indeed in the wrong.
Sorry, anonymous people on reddit aren't a good comparison. This needs to be studied against people in real life who have a social contract of some sort, because that's what the LLM is imitating, and that's who most people would go to otherwise.
Obviously subservient people default to being yes-men because of the power structure. No one wants to question the boss too strongly.
Or how about the example of a close friend in a relationship or making a career choice that's terrible for them? It can be very hard to tell a friend something like this, even when asked directly if it is a bad choice. Potentially sacrificing the friendship might not seem worth trying to change their mind.
IME, LLMs will shoot holes in your ideas, and they'll do so efficiently. All you need to do is ask directly. I have little doubt they outperform most people in some sort of friendship, relationship, or employment structure when asked the same question. It would be nice to see that studied, rather than comparing against reddit commenters who already self-selected into answering "AITA".
“AI is nicer than the average redditor” would be a more accurate title
What's your research background in this area?
>Obviously subservient people default to being yes-men because of the power structure. No one wants to question the boss too strongly.
This drives me nuts as a leader. There are times when, yes, I need you to just listen, and if this is one of those times I'll likely tell you, but goddamnit, speak up. If for no other reason than I might not have thought of what you've got to say. Then again, I also understand most boss types aren't like me, so everyone ends up conditioned not to bloody collaborate by the time they get to me. It's a bad sitch all the way around.
Not only that, but subreddits like r/AmITheAsshole are full of AI slop. Both in the comments and in the posts. It's a huge karma mining operation for bots.
> This needs to be studied against people in real life who have a social contract of some sort... IME, LLMs will shoot holes in your ideas and it will efficiently do so.
The Krafton / Subnautica 2 lawsuit paints a very different picture. "Ignored legal advice" and "followed the LLM" was a choice. Do you think someone whose conversations treat "conviction" and "feelings" as the arbiters of choice is going to buy into the LLM's pushback, or push it until it gives the contrived outcome they want?
The LLM lacks will, it's more or less a debate team member and can be pushed into arguing any stance you want it to take.
> Sorry, anonymous people on reddit aren't a good comparison.
Yeah especially on r/AmITheAsshole. Those comments never advocate for communication, forgiveness and mending things with family.