What's your target false positive rate?
I mean, obviously you can't know your actual error rates, but it seems useful to estimate one and to have a rough intuition for what your target rate should be.
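One way to put a rough number on it is to hand-audit a sample of flagged items and compute the observed rate with a crude confidence interval. A minimal sketch in Python, assuming you have a small hand-labeled sample (strictly this measures the false discovery rate among flags, which is often what people mean colloquially); all names here are illustrative:

    import math

    def estimate_fpr(labels: list[bool]) -> tuple[float, float]:
        """labels: True where a flagged item turned out to be a real positive,
        False where it was a false positive.
        Returns (observed rate, 95% margin of error)."""
        n = len(labels)
        false_positives = sum(1 for is_real in labels if not is_real)
        rate = false_positives / n
        # Normal-approximation margin of error; crude, but fine for intuition.
        margin = 1.96 * math.sqrt(rate * (1 - rate) / n)
        return rate, margin

    # e.g. audit 200 flagged items by hand, 14 turn out to be bogus:
    sample = [False] * 14 + [True] * 186
    rate, margin = estimate_fpr(sample)
    print(f"observed rate: {rate:.1%} +/- {margin:.1%}")  # compare to your target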
Did ChatGPT write this response?
This is how LLMs poison the discourse.