Absolutely. Even worse, when you ask an AI to solve a problem, it almost always adds code, even when a better solution exists that removes code. If its new solution fails and you ask it to fix it, it throws even more code at the problem, creating more mess and introducing unnecessary new state. Rinse, repeat ad infinitum.
I tried this a few times as an experiment, already knowing how the problem could be solved. In difficult situations Cursor invariably adds code and creates even more mess.
I wonder whether this can be mitigated at the inference level, because prompting doesn't seem to help with this problem.
The same thing happens with infrastructure config. Ask an AI to fix a security group issue and it'll add a new rule instead of fixing the existing one. You end up with 40 rules where 12 would do, and nobody knows which ones are actually needed anymore.
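For what it's worth, the cleanup part is usually mechanical: dedupe before you add rule #41. Here's a rough Python/boto3 sketch (the group ID is made up, and it only catches exact duplicates, not shadowed or overlapping CIDRs) that flags redundant ingress rules as candidates for removal:

    import boto3

    ec2 = boto3.client("ec2")
    sg_id = "sg-0123456789abcdef0"  # hypothetical group ID

    # Pull every ingress permission on the group.
    sg = ec2.describe_security_groups(GroupIds=[sg_id])["SecurityGroups"][0]

    seen = set()
    for perm in sg["IpPermissions"]:
        for rng in perm.get("IpRanges", []):
            # Protocol "-1" (all traffic) has no port keys, hence .get().
            key = (perm.get("IpProtocol"), perm.get("FromPort"),
                   perm.get("ToPort"), rng["CidrIp"])
            if key in seen:
                print("redundant rule:", key)  # delete this, don't add another
            seen.add(key)

Point being: the "fix" an AI reaches for (one more authorize call) is strictly easier than the fix the group actually needs (an audit plus revokes), which is exactly the add-only bias from the parent comment showing up in infra.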