I find it somewhat telling that most (not all) of this thread doesn't even attempt to answer the questions posed by the OP, but flatly denies that the problem of psychological harm exists at all.
I feel this is an example of the two larger narratives about AI that currently seem to be forming:
For one side, AI is basically every harmful technology ever invented rolled into one: It's harmful to the environment (via waste of energy and resources), it's harmful to the information space (by polluting everything with slop and devaluing human expression), it's harmful to society (by encouraging ever more shoddy and unreliable products, by taking away jobs, by replacing human-to-human interaction, and by normalizing a mode of development where not even the developers understand what is going on), and it's harmful to whoever uses it personally (by causing ever-growing dependence on AI, whether merely in skills or even emotionally and psychologically, up to the point of AI psychosis and preferring AI agents to other humans).
For the other side, AI is the future, the next industrial revolution, the thing you have to adapt to or be left behind, possibly even the next stage of evolution.
Right now, I feel both sides are digging in and trying ever harder to ignore each other.
(The AI labs acknowledge "AI risks" in theory - but, as the article pointed out, the risks they perceive and ostensibly work against are so abstract and removed from the everyday use of AI that they effectively make the AI proponents' case for them.)
I fear the end result of this growing tension is a Molotov cocktail thrown into Sam Altman's home.
I'd really like to know more about what the tech community at large is trying to do about this rift.