These kinds of instructions are the main added value of LLMs, and I use them every day. Even though 30%-60% of the output is wrong or irrelevant, the rest is helpful enough. After a human reviews it, the overall quality of the codebase increases, not decreases. That puts it at the opposite end of the spectrum from agentic coding, though.