"Find any inconsistencies that should be addressed in this codebase according to DRY and related best practices"
This costs little to try and yields detailed, valuable feedback far more quickly than even an experienced developer seeing the project for the first time.
These kinds of instructions are the main added value of LLMs, and I use them every day. Even though 30–60% of the output is wrong or irrelevant, the rest is helpful enough. After a human reviews it, the overall quality of the codebase increases rather than decreases. This is on the opposite end of the spectrum from agentic coding, though.
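To make this concrete, here is a minimal sketch (with hypothetical function names) of the kind of DRY violation such a prompt tends to surface, along with the obvious consolidation:

```python
# Hypothetical example: two near-duplicate functions a DRY review would flag.

def format_user_name(first, last):
    return f"{first.strip().title()} {last.strip().title()}"

def format_author_name(first, last):
    return f"{first.strip().title()} {last.strip().title()}"

# The suggested fix: collapse both into a single shared helper.
def format_full_name(first: str, last: str) -> str:
    """Normalize whitespace and casing, then join a first/last name pair."""
    return f"{first.strip().title()} {last.strip().title()}"

print(format_full_name("  ada ", "lovelace"))  # Ada Lovelace
```

The individual fix is trivial; the value of the prompt is that the model sweeps the whole codebase for such duplicates at once, which a human reviewer rarely does systematically.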