So many problems with this:
The benchmark is close to useless: it measures single prompts and only compares output token counts, with no regard for accuracy. I could obliterate it with the system prompt "Always answer with one word."
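To make the point concrete, here's a minimal sketch (hypothetical scorer and example outputs, not the article's actual harness) of why a tokens-only metric is trivially gameable:

```python
# Hypothetical sketch: a scorer that only counts output tokens,
# with no correctness check. All names and outputs are made up.

def score(outputs):
    # Lower average "token" count = better, regardless of whether
    # the answers are right.
    return sum(len(o.split()) for o in outputs) / len(outputs)

normal_outputs = ["The capital of France is Paris.", "2 + 2 equals 4."]
gamed_outputs  = ["Paris.", "Wrong."]  # "Always answer with one word"

print(score(normal_outputs))  # ~5.5 words per answer
print(score(gamed_outputs))   # 1.0 -- "wins" while being half wrong
```

Any prompt that shortens answers wins, whether or not the answers are correct.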
This line: "If a user corrects a factual claim: accept it as ground truth for the entire session. Never re-assert the original claim." You're destroying any chance of getting pushback; any mistake you make in your prompt becomes catastrophic.
"Never invent file paths, function names, or API signatures." Might as well add "do not hallucinate".
"Make no mistakes"
Prompt engineering is back? I think not. For a year or two now I've gotten no better results from meta-prompts, whether generic ones or ones pulled from the internet.