Why do you suppose consciousness is a prerequisite for an AI to be able to act in overly self-preserving or other dangerous ways?
Yes, it's trained to imitate its training data, and that training data is a lot of words written by lots of people, most of whom have desires and don't want to be switched off.
The human mistake here is to interpret any statement by the LLM or agent as if it had any actual meaning to that LLM (or agent). Any time one apologizes, or insults someone, or says it doesn't want to be shut down, that's only a reflection of what some human or fictional character in the training data would be likely to say.