> You can’t get mad at an experiment for not happening in the future.
I’m more mad at this sentence for not making any sense. I’m disappointed that this experiment didn’t test the actual capabilities of an LLM. Comprende?
> They simulated common end user behavior
Not the way you use it. And not the way it will be used.
You love it because you want it to stay this way so you can forever believe AI will never be better than you.
Bro the reality is unfolding as you speak. It’s like humanity just discovered guns but hasn’t discovered bullets yet, and you’re saying guns are useless because most of humanity hasn’t figured them out.
> We’ve gone from “this study is flawed because language models don’t do that” to “this study is flawed because while language models do do that, I don’t think that they will in the future” to “data that could support a bias other than my own is bad”
This is a flat-out lie. Models DO do that. The only fucking argument you have is that non-technical, average laypeople edit documents the wrong way, while everyone who uses agentic AI as an adept uses it the correct way. Like, are you fucking kidding me?
The only difference I acknowledge is that your grandma copies and pastes essays into ChatGPT while YOU don’t. You go pretend you live in that reality where the bullets will never appear.
>You love it because you want it to stay this way so you can forever believe AI will never be better than you.
>Bro the reality is unfolding as you speak
>You go pretend you live in that reality where the bullets will never appear.
It’s too late bro, Roko’s basilisk was real and it’s already punishing you.