Before AI happened, I watched YouTube. Occasionally I encountered very convincing arguments there, and often the same person made very convincing arguments on many subjects.
But I noticed that the closer their topic was to my area of competence, the less convincing their arguments became. There were more holes, errors, and wrong conclusions.
That recalibrated my BS meter.
Since AI came along, I've successfully used this strategy of being extremely cautious about convincing arguments to avoid being misled by AI.
However, this year I'm working with AI more in the domain of software development, where I can actually judge competence. And I do see competence. This has had the opposite effect on me: after seeing what AI can do in software, I tend to trust it much more outside my domain of expertise.
One caveat, though: there are many areas of human culture where there's very little actual knowledge but a lot of opinions, like politics, economics, diet, business, and health. I still don't trust AI in those domains. But then again, I don't trust humans there either.
For me, AI has basically reached the threshold of useful reliability in any domain where humans are reliable.
I don't really care about sycophancy. I might have a slight advantage in that I don't talk to AI in my native language, so its responses don't have a direct line to my emotions.