Every psychologist and therapist I have talked to about using LLMs in place of personal interaction (just informal discussion of the topic) has said roughly the same thing:
Any attempt to use LLMs as a substitute for personal interaction is playing an incredibly dangerous game, one that will probably make the companies involved a lot of money while hurting a lot of people.
You might want to reread who the patient was, because that is obviously not going to happen, no matter how bad the AI is ...
Oh, and taking sycophancy out of a model is easy: just fine-tune away the tendency to agree with everything. Besides, every new model has less of it, or at least masks it better.