I don't want to defend LLM-written code, but this is true regardless of whether the code is written by a person or a machine. There are engineers who will put in the time to learn and optimize their code for performance and focus on security, and there are others who won't. That has nothing to do with AI writing code. There is a reason why most software is so buggy and why nearly all software has known security vulnerabilities, regardless of who wrote it.
I remember how website security was before frameworks like Django and RoR added default security features. I think we will see something similar with coding agents: they will just run skills/checks/MCPs/... that have performance, security, resource management, ... built in by default.
I have done this myself. For all the apps I build, I have linters, static code analyzers, etc. running at the end of each session. It's the cheapest default setup, run in a very strict mode. Cleans up most of the obvious stuff almost for free.
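As a minimal sketch of what an end-of-session check could look like: the tool names here (eslint, ruff, shellcheck) are just examples of common linters, and the script is illustrative, not a specific setup from the comment above. It runs each tool only if it's installed and reports findings without aborting the whole pass.

```shell
#!/bin/sh
# Hypothetical post-session check script -- tool names are examples;
# swap in whatever your stack actually uses.
set -u

run_if_present() {
  # Run a tool only if it is installed; report findings but keep going.
  tool="$1"; shift
  if command -v "$tool" >/dev/null 2>&1; then
    "$tool" "$@" || echo "$tool reported issues"
  fi
}

run_if_present eslint . --max-warnings 0
run_if_present ruff check .
run_if_present shellcheck ./*.sh

echo "post-session checks done"
```

Wiring something like this into whatever hook your agent or editor fires at session end is the whole trick: the checks run every time, so the obvious stuff never accumulates.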
> For all the apps I build, I have linters, static code analyzers, etc. running at the end of each session.
I think this is critically underrated. At least in the TypeScript world, linters are seen as kind of a joke (oh, you used tabs instead of spaces), but they can definitely prevent bugs if you spend some time, even vibe coding, setting up some basic code-smell rules (exhaustive deps in React hooks is one such rule).
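For concreteness, here is a sketch of an ESLint config along those lines. The `react-hooks/exhaustive-deps` and `react-hooks/rules-of-hooks` rules come from the real `eslint-plugin-react-hooks` package mentioned above; the other rules are illustrative examples of bug-catching (rather than formatting) checks.

```javascript
// .eslintrc.cjs -- illustrative sketch; assumes eslint and
// eslint-plugin-react-hooks are installed in the project.
module.exports = {
  plugins: ['react-hooks'],
  rules: {
    // Catches real bugs: stale closures from missing hook dependencies.
    'react-hooks/exhaustive-deps': 'error',
    // Enforces the Rules of Hooks (e.g. no conditional hook calls).
    'react-hooks/rules-of-hooks': 'error',
    // Examples of non-cosmetic rules that flag likely logic errors:
    'no-fallthrough': 'error',
    'eqeqeq': ['error', 'always'],
  },
};
```

The point is that none of these are tabs-vs-spaces rules: each one flags a class of runtime bug, and an agent running the linter at the end of a session catches them before they ship.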