Hacker News

_pdp_ · last Saturday at 10:35 AM · 8 replies

LLMs need prompts. Prompts can get very big very quickly. The so-called "skills", which exist in other forms on platforms outside of Anthropic and OpenAI, are simply a mechanism to extend the prompt dynamically. The tools (scripts) that are part of a skill are no different than simply having the tools already installed in the OS where the agent operates.

The idea behind skills is sound because context management matters.

However, skills are different from MCP. Skills have nothing to do with tool calling at all!

You can implement your own version of skills easily, and there is absolutely zero need for any kind of standard or framework. The way to do it is to register a tool / function to load and extend the base prompt and presto - you have implemented your own version of skills.
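A minimal sketch of that pattern, assuming a generic tool-calling loop; the file layout and the `list_skills` / `load_skill` names are hypothetical, not any vendor's API:

```python
import tempfile
from pathlib import Path

# Hypothetical layout: each skill is a markdown file, <skills_dir>/<name>.md,
# holding the specialized instructions. (A temp dir stands in for a real one.)
skills_dir = Path(tempfile.mkdtemp()) / "skills"
skills_dir.mkdir()
(skills_dir / "pdf-forms.md").write_text(
    "# pdf-forms\nUse pypdf to read form fields; never guess field names.\n"
)

def list_skills() -> list[str]:
    """Cheap index placed in the base prompt so the context stays small."""
    return sorted(p.stem for p in skills_dir.glob("*.md"))

def load_skill(name: str) -> str:
    """Registered as a tool; the model calls it to pull a skill into context."""
    path = skills_dir / f"{name}.md"
    return path.read_text() if path.is_file() else f"unknown skill: {name}"

base_prompt = (
    "You are a helpful agent. Skills available: " + ", ".join(list_skills())
    + ". Call load_skill(name) to expand one into your context."
)
```

The base prompt only ever carries the skill names; the full instructions enter the context lazily, when the model asks for them.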

In ChatBotKit AI Widget we even have our own version of that for both the server and when building client-side applications.

With client-side applications the whole thing is implemented with a simple React hook that adds the necessary tools to extend the prompt dynamically. You can easily come up with your own implementation in 20-30 lines of code. It is not complicated.

Very often people latch onto some idea thinking it is the next big thing and hoping that it will explode. It is not new and it won't explode! It is just part of a suite of tools that already exist in various forms. The mechanic is so simple at its core that it practically makes no sense to call it a standard, and there is absolutely zero need for it in most types of applications. It does make sense for coding assistants, though, as they work with quite a bit of data, so there it matters. But skills are not fundamentally different from *.instruction.md prompt in Copilot or AGENT.md and its variations.


Replies

electric_muse · last Saturday at 1:26 PM

> But skills are not fundamentally different from *.instruction.md prompt in Copilot or AGENT.md and its variations.

One of the best patterns I’ve seen is having an /ai-notes folder with files like ‘adding-integration-tests.md’ that contain specialized knowledge suited to specific tasks. These “skills” can then be inserted/linked into prompts where I think they are relevant.

But these skills can’t be static. For best results, I observe what knowledge would make the AI better at the skill the next time. Sometimes I ask the AI to propose new learnings to add to the relevant skill files, and I adopt the sensible ones while managing length carefully.
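The selection step of this pattern can be sketched in a few lines. This is a toy: the folder layout and the filename-overlap scoring are illustrative, not a standard convention.

```python
import tempfile
from pathlib import Path

# Stand-in /ai-notes folder with two hypothetical skill files.
notes_dir = Path(tempfile.mkdtemp())
(notes_dir / "adding-integration-tests.md").write_text("How we write integration tests.")
(notes_dir / "db-migrations.md").write_text("Our migration conventions.")

def relevant_notes(task: str) -> list[str]:
    """Pick note files to link into a prompt by crude keyword overlap
    between the task description and the filename."""
    task_words = set(task.lower().split())
    scored = []
    for note in notes_dir.glob("*.md"):
        # 'adding-integration-tests' -> {'adding', 'integration', 'tests'}
        name_words = set(note.stem.lower().split("-"))
        overlap = len(task_words & name_words)
        if overlap:
            scored.append((overlap, note.name))
    return [name for _, name in sorted(scored, reverse=True)]

hits = relevant_notes("help me adding integration tests for the API")
```

In practice you would let the model pick from an index of filenames instead of keyword matching, but the shape is the same: a small index up front, full content only when relevant.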

Skills are a great concept for specialized knowledge, but they really aren’t a groundbreaking idea. It’s just context engineering.

show 3 replies
cube2222 · last Saturday at 2:08 PM

The general idea is not very new, but the current chat apps have added features that are big enablers.

That is, skills make the most sense when paired with a Python script or CLI that the skill uses. Nowadays most AI model providers have code-execution environments that the models can use.

Previously, you could only use such skills with locally running agent CLIs.

This is, imo, the big enabler, and it may well mean that “skills will go big”. And yeah, having implemented multiple MCP servers, I think skills are a way better approach for most use cases.

show 2 replies
lxgr · last Saturday at 10:51 AM

Many useful inventions seem blindingly obvious in hindsight.

Yes, in the end skills are just another way to manage prompts and avoid cluttering the context of a model, but they happen to be one that works really well.

show 1 reply
bg24 · last Saturday at 12:47 PM

With a little bit of experience, I realized that it makes sense even for an agent to run commands/scripts for deterministic tasks. For example, to find a particular app out of a list of N (can be 100) with complex filtering criteria, the best option is to run a shell command that produces the specific output.

In this way, you can divide a job into blocks of reasoning and deterministic tasks. The latter are scripts/commands. The whole package is called a skill.

PythonicNinja · last Sunday at 3:22 PM

Added a blog post about skills in AI, with references to Dotprompt / Claude / Dia browser skills: "Skills Everywhere: Portable Playbooks for Codex, Claude, and Dia"

https://pythonic.ninja/blog/2025-12-14-codex-skills/

btown · last Saturday at 2:02 PM

> [The] way to do it is to register a tool / function to load and extend the base prompt and presto - you have implemented your own version of skills.

So are they basically just function tool calls whose return value is a constant string? Do we know if that’s how they’re implemented, or is the string inserted into the new input context as something other than a function_call_output?

show 1 reply
valstu · last Saturday at 10:57 AM

> However, skills are different from MCP. Skills has nothing to do with tool calling at all

Skills do require certain tools to be available, though, like basic file-system operations, so the model can read the skill files. Usually this is implemented as an ephemeral "sandbox environment" where the LLM has access to a file system and can also execute Python, run bash commands, etc.
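To make that concrete, here is a toy loader built on nothing but plain file reads, assuming a SKILL.md-with-frontmatter layout similar to Anthropic's convention (the parsing here is deliberately simplified and would not handle real YAML):

```python
import tempfile
from pathlib import Path

# Stand-in sandbox file system with one skill directory.
root = Path(tempfile.mkdtemp())
skill = root / "pdf-forms" / "SKILL.md"
skill.parent.mkdir()
skill.write_text(
    "---\nname: pdf-forms\ndescription: Fill PDF forms programmatically\n---\n"
    "Full instructions go here...\n"
)

def skill_index() -> dict[str, str]:
    """Preload only the frontmatter metadata; the body stays on disk until
    the model reads it on demand with an ordinary file operation."""
    index = {}
    for path in root.glob("*/SKILL.md"):
        meta = {}
        for line in path.read_text().splitlines()[1:]:
            if line == "---":  # end of frontmatter block
                break
            key, _, value = line.partition(": ")
            meta[key] = value
        index[meta["name"]] = meta["description"]
    return index
```

The point is that no dedicated tool-calling protocol is involved: generic file access is enough for both discovery and loading.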

kelvinjps10 · last Saturday at 1:26 PM

Isn't the simplicity of the concept exactly what will make it "explode"?