AI doesn’t understand; that’s the problem.
If the training data contains mistakes, it’s more likely to reproduce them, unless there are preprogrammed rules to prevent that.
I’ve had really good results, but of course ymmv
As a side note, most good coding models now are also reasoning models, and spend a few seconds “thinking” before giving a reply.
That’s by no means infallible, but they’ve come a long way even just in the last 12 months.