Does crapping on the average school's deep well of expertise for evaluating how effectively AI software solutions address its problems somehow fix the underlying problem (that the cost of catching cheaters is significantly higher than the cost of cheating)?
(This is roughly the same problem as evaluating software that only does an approximation of what it claims to do.)
(Aside: AI-based variations on this theme are in the early stages of proliferating across our society. They're being developed by many people on this forum and sold to our schools, businesses, governments, and other organizations with little regard for whether they actually do what they claim.)
> that the cost of catching cheaters is significantly higher than the cost of cheating
This is tackling the problem from the wrong direction. The right direction would be to make cheating harder in the first place. For example: if a student submits an essay and can then coherently and accurately answer questions about it in a face-to-face conversation, that student is probably its genuine author.