Yes, pretty much.
LLM-powered agents are surprisingly human-like in their errors and misconceptions about tools that are new or not widely documented. Skills are basically just small how-to files, sometimes bundled with usage examples, helper scripts, etc.
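For anyone who hasn't looked at one, here's a rough sketch of what a skill can look like (modeled loosely on Anthropic-style Agent Skills; the names, layout, and script flags below are purely illustrative, not an official template):

```
pdf-form-filler/
├── SKILL.md          # instructions the agent reads when the skill seems relevant
├── examples.md       # optional worked examples
└── scripts/
    └── fill_form.py  # optional helper script the agent can run

# SKILL.md (roughly)
---
name: pdf-form-filler
description: Fill out PDF forms. Use when the user asks to complete or populate a form.
---
1. List the form's fields (e.g. via the helper script above).
2. Map the user-provided values to field names.
3. Run the helper script to write the filled PDF, then verify the output.
```

That's really all there is to it: a short description so the agent knows when to reach for it, and step-by-step instructions so it doesn't have to guess how the tool works.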