> Anything but the simplest tooling is not transferable between model generations, let alone completely different families.
It is transferable. Yes, you will run into issues if you take prompts and workflows tuned for one model and send them to another unchanged. But most of the time, fixing it is just tinkering with some prompt templates.
People port solutions between models all the time. It takes some work, but it's a tractable amount of work.
Plus: this is absolutely the kind of task a coding agent can accelerate.
The biggest risk is if your solution is at the frontier of capability, and a competing model (even another frontier model) just can't do it. But for a lot of use cases, that isn't true. And even if it is true today, there are decent odds that in a few more months it won't be.
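To make the "tinkering with prompt templates" point concrete, here is a minimal sketch of one common way people keep porting costs low: isolating model-specific wording in templates keyed by model family, so moving to a new model means editing a template rather than the pipeline. All names here are illustrative, not any particular vendor's API.

```python
# Hypothetical sketch: model-specific prompt wording lives in templates,
# so porting a workflow to a new model family touches only this dict.
TEMPLATES = {
    "model_a": "You are a precise classifier. Text: {text}\nAnswer with one word.",
    "model_b": "<task>classify</task>\n<input>{text}</input>\nReply with a single label.",
}

def build_prompt(model_family: str, text: str) -> str:
    """Render the prompt for a given model family; the rest of the
    pipeline never needs to know which family it is talking to."""
    return TEMPLATES[model_family].format(text=text)

print(build_prompt("model_a", "the product arrived broken"))
```

The point is the shape, not the details: when the prompt surface is the only model-specific part, "port to another model" becomes "retune a few strings and rerun the evals."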
Ha. Sounds a lot like the debate over the one 10x engineer vs. predictably mediocre ones supported by a scaffolding of processes. Aim high and hit or miss, or grind predictably and continuously. Same with humans; it depends on the losses you can afford.
Yep. My approach has been: if I can't reliably get something to 90+% with a flash / nano / haiku-class model, then it isn't viable for any accuracy-critical work. (I don't know of, or have the luck of having, any other kind of work.) Starting out with pro / opus for production classification work has always been a trap.
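That 90% gate is easy to operationalize before committing to a model. A sketch, assuming you have a labeled eval set and predictions from the cheap model; the threshold and toy labels below are placeholders:

```python
# Sketch of an accuracy gate: only consider a cheap model viable for an
# accuracy-critical classification task if it clears a fixed threshold
# on a held-out labeled eval set.
def accuracy(preds: list[str], labels: list[str]) -> float:
    """Fraction of predictions that exactly match the gold labels."""
    return sum(p == g for p, g in zip(preds, labels)) / len(labels)

def viable(preds: list[str], labels: list[str], threshold: float = 0.90) -> bool:
    """True if the model clears the accuracy bar on this eval set."""
    return accuracy(preds, labels) >= threshold

# Toy eval set: 10 examples, 9 correct -> accuracy 0.90, passes the gate.
labels = ["pos"] * 5 + ["neg"] * 5
preds = ["pos"] * 5 + ["neg"] * 4 + ["pos"]
print(viable(preds, labels))  # True (accuracy is exactly 0.90)
```

Run the same gate against the flash-tier model first; if it fails, the task either needs rework or isn't a fit, regardless of what the pro-tier model scores.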