This is a classic example highlighting the upside of local LLMs.
However, the local LLMs I can run on reasonable hardware are so dumb compared to Opus, and even if I shelled out five figures on hardware to run the largest/smartest open model, it would still be noticeably worse.
Right now the remote models are just so much smarter, and more affordable under most usage patterns.