I think the point is that it smells like a hack, just like "think extra hard and I'll tip you $200" did a few years ago. It bumps benchmarks by a few points now, but what's the point of standardizing all this if it'll be obsolete next year?
Standards have to start somewhere if they're going to gain traction and stick around longer than that.
Plus, as has been mentioned multiple times here, standardized skills are much more about different harnesses being able to load skills into the context window in a consistent, programmatic way. Not every AI workload is a local coding agent.
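To make that concrete, here's a minimal sketch of what harness-side loading could look like: scan a skills directory, pull each skill's name and description out of its SKILL.md frontmatter, and inject that compact index into the system prompt, deferring the full skill body until it's actually invoked. The file layout and field names are assumptions about the general shape of the format, not a reference implementation.

```python
# Illustrative sketch, not a reference implementation: a harness-side loader
# that scans a skills directory and builds a compact index for the model.
# The SKILL.md layout and the `name`/`description` frontmatter fields are
# assumptions about the general shape of the format.
from pathlib import Path

def load_skill_summaries(skills_dir: str) -> list[dict]:
    """Collect name/description pairs from each skill's SKILL.md frontmatter."""
    summaries = []
    for skill_md in sorted(Path(skills_dir).glob("*/SKILL.md")):
        text = skill_md.read_text(encoding="utf-8")
        meta = {}
        if text.startswith("---"):
            # Naive frontmatter parse: key: value lines between the '---' markers.
            frontmatter = text.split("---", 2)[1]
            for line in frontmatter.strip().splitlines():
                key, _, value = line.partition(":")
                if key.strip():
                    meta[key.strip()] = value.strip()
        summaries.append({
            "name": meta.get("name", skill_md.parent.name),
            "description": meta.get("description", ""),
            "path": str(skill_md),  # full body is loaded only when the skill is invoked
        })
    return summaries

def build_system_prompt(base_prompt: str, summaries: list[dict]) -> str:
    """Append a short skill index so the model knows what it can ask for."""
    index = "\n".join(f"- {s['name']}: {s['description']}" for s in summaries)
    return f"{base_prompt}\n\nAvailable skills:\n{index}"
```

Nothing here is specific to a coding agent: any harness that can read files and assemble a prompt can do the same thing, which is exactly why having one agreed-upon layout matters.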
I think this tweet sums it up correctly, doesn't it?
Which is essentially the bitter lesson that Richard Sutton talks about?