Respectfully, unless someone is really, really bad at articulating what their quality standards are, or works with a very niche stack, that is definitely not the case anymore with SOTA models.
You're presuming too much about what OP's quality standards are. Can SOTA models outperform the average junior engineer? Yes, obviously. Can they match the best human engineers, if those humans were given all the time and interest in the world? Equally obviously not.
I use hundreds of millions of tokens a month, and LLMs have completely transformed the way I work. They're also, frankly, pretty mid programmers.
Respectfully, the current models are all trained on everyone else's legacy code as of roughly six months ago, and largely always will be. If I'm doing my job right, an LLM cannot meet my personal quality bar on its own, because I will always need innovation and excellence it has never seen and thus cannot deliver. I also think that training these tools up to my personal quality bar is more work than just writing the code myself.