>LLMs can't be strategic because they do not understand the big picture
While I do tend to believe you, what evidence based data do you have to prove this is true?
Prompt-injection in all its forms. If the hyper-mad-libs machine doesn't reliably "understand" and model the difference between internal and external words, how can we trust them to model fancier stuff?
> While I do tend to believe you, what evidence based data do you have to prove this is true?
IMO the onus is to prove that they can be strategic. Otherwise you're asking me to prove a negative.