The point is that if the harness's workflow gives contradictory and confusing instructions to the model, it's a harness issue, not necessarily a model issue.
First it was a model issue, then it was a prompting issue, then it was a context issue, then it was an agent issue, now it's a harness issue. AI advocates keep accusing AI skeptics of moving goalposts. But it seems like every 3-6 months another goalpost is added.