the adaptive thinking complaints in this thread are interesting because they're basically the same verifier-quality problem showing up in a different costume. the model has to decide how hard to think before it knows how hard the problem is, and that meta-decision is itself a hard problem nobody has solved cleanly, not in RL, not in speculative decoding, not in branch prediction. the fact that disabling adaptive thinking and forcing high effort restores quality tells us the router is underthinking, not that the model got worse, which means anthropic is trading user experience for compute savings whether or not they frame it that way
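the underthinking-vs-degradation distinction is easy to sketch with a toy model (everything here is hypothetical, not anthropic's actual router): a router picks an effort level from a cheap, noisy estimate of difficulty made before solving, while the solver itself is held fixed. the noisy estimate alone is enough to lose quality on hard problems, and forcing max effort recovers it:

```python
import random

random.seed(0)

def solve(difficulty, effort):
    # fixed solver: quality only drops when effort falls short of true difficulty
    return 1.0 if effort >= difficulty else effort / difficulty

def adaptive(problems, noise=0.4):
    # router sees difficulty only through a noisy pre-solve proxy,
    # so it underthinks some hard problems even though solve() never changed
    total = 0.0
    for d in problems:
        estimate = d + random.uniform(-noise, noise)
        effort = min(1.0, max(0.1, estimate))
        total += solve(d, effort)
    return total / len(problems)

def forced_high(problems):
    # disabling the router: always spend maximum effort
    return sum(solve(d, 1.0) for d in problems) / len(problems)

problems = [random.uniform(0.2, 1.0) for _ in range(1000)]
print(adaptive(problems))     # below 1.0: the meta-decision misses
print(forced_high(problems))  # exactly 1.0: solver was fine all along
```

the point of the toy is just that "quality returns when you force high effort" is exactly the signature you'd expect from a miscalibrated effort router in front of an unchanged model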