> but in doing what I don't know as well.
Comments like these really help ground what I read online about LLMs. This matches how low-performing devs at my work use AI, and their PRs are a net negative on the team. They take on tasks they aren't equipped to handle and use LLMs to fill the gaps quickly instead of taking the time to learn (which LLMs speed up!).
This is good insight, and honestly I think it's a sign of a poorly managed team (not an attack on you). If devs are submitting poor-quality work, with or without LLMs, they should be given feedback and let go if it keeps happening; it wastes other devs' time. If there is a knowledge gap, they should be proactive in filling it, again with or without AI, rather than trying to build things they don't understand.
In my experience, LLMs are an accelerator; they merely exacerbate what already exists. If the team has poor management or the codebase has poor-quality code, LLMs just make it worse. If the team has good management and communication and the codebase is well documented with solid patterns already in place (again, with or without LLMs), then LLMs compound that. It may still take some tweaking to make the output better, but there's less chance of slop.