Don't focus on what you prefer: it does not matter. Focus on what tool the LLM requires to do its work in the best way. MCP adds friction: imagine doing the work yourself using the average MCP server. However, skills alone are not sufficient if you want, for instance, to give LLMs the ability to instrument a complicated system. Work in two steps:
1. Ask the LLM to build a tool, under your guidance and specification, in order to do a specific task. For instance, if you are working with embedded systems, build a monitoring interface that, with a simple CLI, lets the agent debug the app as it is running, set breakpoints, spawn the emulator, and restart the program from scratch in a second by re-uploading the live image and resetting the microcontroller. This is just an example, I bet you got what I mean.
2. Then write a skill file that explains how to use the tool from step 1.
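For the embedded example above, the skill file could be a sketch like this (the tool name and flags are invented here, just to show the shape):

```markdown
# Skill: emu-debug

`emu-debug` is a CLI (built in step 1) for instrumenting the firmware
while it runs in the emulator.

## Common commands
- `emu-debug spawn` — start the emulator with the current build
- `emu-debug flash --reset` — re-upload the live image and reset the MCU (~1s)
- `emu-debug break <file>:<line>` — set a breakpoint
- `emu-debug logs --follow` — stream the app's output

## Notes
- Always re-flash after changing source files.
- Prefer `flash --reset` over restarting the emulator from scratch.
```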
Of course, for simple tasks you don't need the first step at all. For instance, it does not make sense to have an MCP for git. The agent already knows how to use git: git is comfortable for you to use manually, and it is likewise good for the LLM. Similarly, if you often estimate the price of running something on AWS, instead of an MCP with service discovery and pricing that needs to be queried in JSON (would you ever use something like that?), write a simple .md file (using the LLM itself) with the prices of the things you use most commonly. This is what you would love to have, and this is what the LLM wants. For complicated problems, instead, build the dream tool you would build for yourself, then document it in a .md file.
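To make the AWS example concrete, such a cheat sheet can be tiny. The numbers below are placeholders; regenerate them from your own account, region, and the services you actually use:

```markdown
# AWS quick pricing (us-east-1 — placeholders, regenerate monthly)

| Service       | Unit           | ~Price   |
|---------------|----------------|----------|
| EC2 t3.medium | per hour       | $0.0416  |
| S3 Standard   | per GB / month | $0.023   |
| Lambda        | per 1M requests| $0.20    |

Rule of thumb: monthly cost of an always-on instance ≈ hourly price × 730.
```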
This is how I work with my agent harness. I also have skills for writing tools and skills.
And I still think people don't understand why MCPs are still needed and when to use them.
It's actually pretty simple.
Feels to me like the toolchain for using LLMs in various tasks is still in flux (I interpret all of this as "stuff in different places, like .md files or skills or elsewhere, that is appended to the context window"; I hope that is correct). Shouldn't this overall process be standardized/automated? That is, use some self-reflection to figure out patterns that are then dumped into the optimal place, like a .md file or a skill?
This is exactly what I do too, and it works very well. I have a whole bunch of scripts and CLI tools that Claude can use; most of them were built by Claude too. I very rarely need to use my IDE because of this, as I've replicated some of JetBrains' refactorings so Claude doesn't have to burn tokens to do the same work. It also turns a 5-minute Claude session into a 10-second one, as the scripts/tools are purpose-made. It's really cool.
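As a rough illustration of the kind of purpose-made tool meant here (not the parent's actual script — a minimal sketch), a rename-symbol CLI lets the agent do in one shell call what would otherwise be a multi-file editing session:

```python
#!/usr/bin/env python3
"""Minimal sketch of a purpose-made refactoring CLI: rename a symbol
across a source tree so the agent doesn't edit each file by hand."""
import argparse
import pathlib
import re
import sys


def rename_symbol(root: str, old: str, new: str, ext: str = ".py") -> int:
    """Replace whole-word occurrences of `old` with `new`.

    Returns the number of files that were changed.
    """
    pattern = re.compile(rf"\b{re.escape(old)}\b")
    changed = 0
    for path in pathlib.Path(root).rglob(f"*{ext}"):
        text = path.read_text(encoding="utf-8")
        new_text, n = pattern.subn(new, text)
        if n:
            path.write_text(new_text, encoding="utf-8")
            changed += 1
    return changed


if __name__ == "__main__" and len(sys.argv) == 4:
    p = argparse.ArgumentParser(description="Rename a symbol across a tree")
    p.add_argument("root")
    p.add_argument("old")
    p.add_argument("new")
    args = p.parse_args()
    print(f"updated {rename_symbol(args.root, args.old, args.new)} file(s)")
```

The corresponding skill file then only needs one line: "to rename a symbol, run `rename.py <root> <old> <new>` instead of editing files".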
edit: just want to add, I still haven't implemented a single MCP-related thing. Don't see the point at all. REST + Swagger + codegen + Claude + skills/tools works well enough.
This is my life motto: explore progressively, codify, then use your codified workflows.
> for each desired change, make the change easy (warning: this may be hard), then make the easy change - Kent Beck
> For instance it does not make sense to have an MCP to use git.
What if you don’t want the AI to have any write access for a tool? I think the ability to choose what parts of the tool you expose is the biggest benefit of MCP.
As opposed to a READ_ONLY_TOOL_SKILL.md that states "it's important that you must not use any edit APIs…"
> MCP adds friction, imagine doing yourself the work using the average MCP server.
Why on earth don't people understand that MCP and skills are complementary concepts? If people argue MCP vs. skills, they clearly don't understand either deeply.
Although the author is coming from a place of security and configuration being painful with Skills, I think the future will be a mix of MCP, Agents and Skills. Maybe even a more granularly defined unit below a skill: a command...
These commands would be well defined and standardised, maybe with a hashed value that could be used to ensure re-usability (think Docker layers).
Then I would just have skills called:
- github-review-slim:latest
- github-review-security:8.0.2
MCPs will still be relevant for those tricky monolithic services or weird business processes that aren't logged or recorded on metrics.
If your LLM even sees a difference between a local skill and a remote MCP, that's a leak in your abstraction and a shortcoming of the agent harness, and it should not influence how we build these systems for devs and end users. The way this comment thinks about building for agents would lead to a hellscape.
This is covered well in the article too. See "The Right Tool for the Job" and "Connectors vs. Manuals."
Perhaps the title is just clickbait. :)
I've found makefiles to be useful. I have a small skill that guides the LLM towards the makefile. It's been great for what you're talking about, but it's also a great way to make sure the agent is interacting with your system in a way you prefer.
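To give an idea of the pattern (a generic sketch, not the parent's actual setup), the makefile becomes the single entry point and the skill just says "run `make help` first". Recipe lines must be tab-indented:

```make
# Single entry point for the agent; targets are self-documenting.
.PHONY: help test lint run

help:  ## list available targets
	@grep -E '^[a-z]+:.*##' $(MAKEFILE_LIST) | awk -F':.*## ' '{printf "%-8s %s\n", $$1, $$2}'

test:  ## run the test suite
	pytest -q

lint:  ## static checks
	ruff check .

run:   ## start the app locally
	python -m app
```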
> Focus on what tool the LLM requires to do its work in the best way.
I completely agree with you. There was a recent finding that said Agents.md outperforms skills. I'm old school and I actually see best results by just directly feeding everything into the prompt context itself.
https://vercel.com/blog/agents-md-outperforms-skills-in-our-...
> Don't focus on what you prefer: it does not matter. Focus on what tool the LLM requires to do its work in the best way.
I noticed that LLMs tend to work with CLIs by default even when there's a connected MCP, likely because a) CLIs are overexposed in the training data, and b) they are more composable and inspectable by design, so they're a better choice in the model's tool selection.
this comment just assumes skills are better without dealing with any of the arguments presented
low quality troll
I feel like the MCP conversation conflates too many things, and everyone has strong assumptions that aren't always correct. The fundamental distinction is one-off vs. persistent access across sessions:
- If you need to interact with a local app in a one-off session, then use CLI.
- If you need to interact with an online service in a one-off session, then use their API.
- If you need to interact with a local app in a persistent manner, and if that app provides an MCP server, use it.
- If you need to interact with an online service in a persistent manner, and if that app provides an MCP server, use it.
Whether the MCP server is implemented well is a whole other question. A properly configured MCP explains to the agent how to use it without too much context bloat. Not using a proper MCP for persistent access, and instead trying to describe the interaction yourself with skill files, just doesn't make any sense. The MCP owner should be optimizing the prompts to help the agent use it effectively.
MCP is the absolute best and most effective way to integrate external tools into your agent sessions. I don't understand what the arguments against that statement are.