Do these skills actually provide much value? Like, how much better are they than something that I could tell Claude to generate based on a single API doc from Slack/Trello?
In my experience, most are just high-level instructions for CLI tools installed on the system. A lot of the CLI tools they call out to have zero reputation on GitHub or don't work at all.
I've had more luck writing my own skills using CLI tools I know and trust.
Skills are actually what Claude Code uses internally too. It's cool because the LLM loads the full instructions for a skill only on demand, which keeps the context cleaner.
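A rough sketch of what that on-demand loading might look like (the `skills/<name>/SKILL.md` layout and function names here are illustrative, not Claude Code's actual implementation): only a one-line description per skill stays resident, and the full instructions are read into context when the skill is invoked.

```python
from pathlib import Path


def build_index(skills_dir: str) -> dict[str, str]:
    """Map skill name -> one-line description.

    This small index is what stays in context at all times;
    the full instructions are loaded lazily by load_skill().
    """
    index = {}
    for skill_file in Path(skills_dir).glob("*/SKILL.md"):
        # Hypothetical convention: first line of SKILL.md is the summary.
        first_line = skill_file.read_text().splitlines()[0]
        index[skill_file.parent.name] = first_line
    return index


def load_skill(skills_dir: str, name: str) -> str:
    """Pull a skill's full instructions into context only when needed."""
    return (Path(skills_dir) / name / "SKILL.md").read_text()
```

The point of the pattern is that a directory of large skill files costs almost nothing in context until one is actually used.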
My understanding is that it's just an abstraction layer that feeds straight into the context window. Might as well feed it into the prompt directly. I think Cursor even showed that skills aren't as good as direct prompts (or something to that effect, can't remember exactly).
>Do these skills actually provide much value?
IMO, yes. Gemini et al. out of the box are good at composing, but entirely passive. Skills let you - easily, with low or no code - teach your AI to perform active tasks, either on direction or under whatever automatic conditions you specify. This is incredibly powerful. Incredibly dangerous, too, but so is a car compared with a skateboard.
Zero. If a skill actually provides value, one of two things happens: it gets absorbed into Claude Code (or similar) within a week, or a company packages it up and charges real money for it. The "free skill that gives you an edge" window is essentially nonexistent. By the time you find it, everyone else has it too. You're better off learning to prompt well against raw API docs than chasing a library of pre-built skills that are either trivial to recreate or about to be made redundant.