Respectfully, the current models are all trained on everyone else's legacy code as of roughly six months ago, and largely always will be. If I'm doing my job right, an LLM cannot meet my personal quality bar on its own, because I will always need innovation and excellence it has never seen and thus cannot deliver. I also think that training these tools up to my personal quality bar is more work than just writing the code myself.
LLMs are plenty innovative: they generate good-quality code by default, and great code when directed well.
If you're not seeing this, the most likely explanation is that you're not directing or using them well.
FWIW, if you don't believe the above, I challenge you to put up a quick git repo where you can't get the quality you want, and we can show you how SOTA agents reach the same quality in a fraction of the hand-coding time.
High-quality C++ code today looks exactly the same as it did six months, hell, six years ago. Innovation doesn't come from code tricks.
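To make that concrete, here's a minimal sketch of what's meant by "looks the same" (the `LogFile` class and path are hypothetical, purely for illustration): RAII ownership, `std::unique_ptr`, and move semantics have been the idiomatic baseline since C++11/14, well over six years.

```cpp
#include <memory>
#include <string>
#include <vector>

// RAII-style ownership: resources tied to object lifetime,
// no raw new/delete. Idiomatic since C++11/14, unchanged since.
class LogFile {
public:
    explicit LogFile(std::string path) : path_(std::move(path)) {}
    void append(std::string line) { lines_.push_back(std::move(line)); }
    const std::vector<std::string>& lines() const { return lines_; }

private:
    std::string path_;  // illustrative only; no real I/O here
    std::vector<std::string> lines_;
};

int main() {
    // unique_ptr expresses sole ownership; freed automatically on scope exit.
    auto log = std::make_unique<LogFile>("/tmp/example.log");
    log->append("started");
    return 0;
}
```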