I find it such a strange cycle to tell AI to write some code, then tell it to fix the bugs in that code. Why didn't the AI just not include those bugs the first time it wrote the code?!
We do the same with humans, so it isn't strange to me. It requires superhuman ability to always get it right on the first try.