Hacker News

MCP is dead; long live MCP

245 points by CharlieDigital, yesterday at 7:32 PM | 188 comments

Comments

arnitdo today at 12:10 PM

Every single AI integration feels under-engineered (or not even engineered in case of tokenslop), as the creators put exactly the same amount of thought that $LLMOFTHEWEEK did into vomiting "You're absolutely right, $TOOL is a great solution for solving your issue!"

We're yet to genuinely standardise bloody help texts for basic commands (Does -h set the hostname, or does it print the help text? Or is it -H? Does --help exist?). Writing man-pages seems like a lost art at this point, everyone points to $WEBSITE/docs (which contains, as you guessed, LLM slopdocs).

We're gonna end up seeing the same loops of "Modern standard for AI" -> "Standard for AI" -> "Not even a standard" -> "Thing of the past" because all of it is fundamentally wrong to an extent. LLMs are purely textual in context, while network protocols are more intricate by pure nature. An LLM will always end up overspeccing a /api/v1/ping endpoint while ICMP ping can do that within bits. Text-based engineering, while visible (in the sense that a tech-illiterate person will find it easy to interpret), will always end up forming abstractions over the core. You'll end up with a shaky pyramid that collapses the moment your $LLM model changes encodings.

gbro3n today at 7:15 AM

A lot of the best tooling around AI we're seeing is adding deterministic gates that the probabilistic AI agents work with. This is why I'm using MCP over HTTP. I'm happy for the agent to use its intelligence and creativity to help me solve problems, but for a range of operations, I want a gate past which actions run with the certainty of normal software functions. NanoClaw sells itself on using deterministic filtering of your WhatsApp messages before the agent gets to see them, and proxies API keys so the agent never gets them. This is a similar type of deterministic gate that allows for more confidence when working with AI.
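A minimal sketch of such a deterministic gate, with an invented sender allowlist and redaction rule (not NanoClaw's actual implementation): the filter runs as ordinary code, so its behavior is certain regardless of what the model does downstream.

```python
import re

# Hypothetical deterministic gate: the agent only ever sees messages that
# pass a fixed filter, and secret-looking tokens are redacted before the
# text reaches the model at all.
ALLOWED_SENDERS = {"+15550100", "+15550101"}
SECRET_RE = re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE)

def gate_messages(messages):
    """Drop messages from unknown senders and redact secret-looking values."""
    visible = []
    for sender, text in messages:
        if sender not in ALLOWED_SENDERS:
            continue  # the agent never learns this message existed
        visible.append((sender, SECRET_RE.sub("[REDACTED]", text)))
    return visible

msgs = [
    ("+15550100", "deploy key api_key=abc123"),
    ("+19998887", "spam"),
]
print(gate_messages(msgs))
```

Because the gate is plain software, it composes with any agent framework: the model's creativity applies only to what survives the filter.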

0xbadcafebee yesterday at 9:03 PM

MCP is a fixed specification/protocol for AI app communication (built on top of an HTTP CRUD app). This is absolutely the right way to go for anything that wants to interoperate with an AI app.

For a long time now, SWEs seem to have been bamboozled into thinking the only way you can connect different applications together is "integrations" (tightly coupling your app into the bespoke API of another app). I'm very happy somebody finally remembered what protocols are for: reusable communications abstractions that are application-agnostic.

The point of MCP is to be a common communications language, in the same way HTTP is, FTP is, SMTP, IMAP, etc. This is absolutely necessary since you can (and will) use AI for a million different things, but AI has specific kinds of things it might want to communicate with specific considerations. If you haven't yet, read the spec: https://modelcontextprotocol.io/specification/2025-11-25
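For a flavor of what that common language looks like: MCP frames its messages as JSON-RPC 2.0, so a tool invocation is a small, application-agnostic envelope. The tool name and arguments below are invented for illustration; only the envelope shape follows the spec.

```python
import json

# A minimal MCP "tools/call" request. MCP frames messages as JSON-RPC 2.0;
# any client that speaks this envelope can talk to any conforming server.
# The tool name and arguments are made up for this example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Berlin"},
    },
}
wire = json.dumps(request)
print(wire)
```

The point stands independent of the example: the envelope, not the tool, is the contract, which is what makes it a protocol rather than an integration.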

codemog yesterday at 8:48 PM

As soon as MCP came out I thought it was over-engineered crud and didn’t invest any time in it. I have yet to regret this decision. Same thing with LangChain.

This is one key difference between experienced and inexperienced devs; if something looks like crud, it probably is crud. Don’t follow or do something because it’s popular at the time.

jswny yesterday at 9:11 PM

MCP is fine, particularly remote MCP, which is the lowest-friction way to get access to some hosted service with auth handled for you.

However, MCP is context bloat and, mechanically, not very good compared to CLIs + skills. With a CLI you get the ability to filter/pipe (regular Unix bash) without having to expand the entire tool call every single time in context.

CLIs also let you use heredoc for complex inputs that are otherwise hard to escape.

CLIs can easily generate skills from the --help output, with agent-specific instructions added on top. That means you can give the agent everything it needs: which tools exist and how to use them, lazy-loaded, without bloating the context window with all the tools upfront (yes, I know tool search in Claude partially solves this).

CLIs also don’t have to run persistent processes like MCP does, but they can if needed.
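The piping and heredoc points above can be sketched with generic commands (no specific tool assumed): the agent filters output before it ever enters the context window, and a quoted heredoc carries awkward multi-line input without escaping.

```python
import subprocess

# Why CLIs compose: a noisy listing is trimmed to a single line with grep,
# so only the filtered line would enter the agent's context, unlike an MCP
# tool response that arrives whole.
pipeline = "printf 'INFO ok\\nERROR disk full\\nINFO done\\n' | grep ERROR"
result = subprocess.run(pipeline, shell=True, capture_output=True, text=True)
print(result.stdout.strip())

# Heredocs sidestep the quoting problem for complex multi-line input:
# with a quoted delimiter, quotes and $vars pass through untouched.
heredoc = "cat <<'EOF'\nline with 'quotes' and $vars left alone\nEOF\n"
doc = subprocess.run(heredoc, shell=True, capture_output=True, text=True)
print(doc.stdout, end="")
```

The same trim-before-context trick is exactly what the "expand the entire tool call" complaint is about: with MCP, the filtering can only happen after the payload has already been spent on context.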

s0ulf3re yesterday at 9:24 PM

I’ve always felt like MCP is way better suited towards consumer usage rather than development environments. Like, yeah, MCP uses a lot of a context window, is more complex than it should be in structure, and it isn’t nearly as easy for models to call upon as a command line tool would be. But I believe that it’s also the most consumer friendly option available right now.

It’s much easier for users to find out exactly what a model can do with your app this way, compared to building a skill that would work with it, since clients can display every available tool to the user. There’s also no need for the model to set up any environment, since it’s essentially just writing out a function call, which saves time since there’s no need to set up as many virtual machine instructions.

It obviously isn’t as useful in development environments where a higher level of risk can be accepted since changes can always be rolled back in the repository.

If I recall correctly, there’s even a whole system for MCP being built, so it can actually show responses in a GUI much like Siri and the Google Assistant can.

ArcaneMoose today at 2:16 PM

I still think MCP is completely unnecessary (and have from the start). The article correctly points out where CLI > MCP but stops short on 2 points:

1. Documenting the interface without MCP. This problem is best solved by the use of Skills which can contain instructions for both CLIs and APIs (or any other integration). Agents only load the relevant details when needed. This also makes it easy to customize the docs for the specific cases you are working with and build skills that use a subset of the tools.

2. Regarding all of the centralization benefits attributed to remote MCPs: you can get the same benefits with a traditional centralized proxy as well. MCP doesn't inherently grant you any of them. If I use AWS SSO via the CLI, all of my permissions are tied to my account, centrally managed, with all the observability benefits.

In my mind, use Skills to document what to do and benefit from targeted progressive disclosure, and use CLIs and REST APIs for the actual interaction with services.

simonjgreen today at 9:44 AM

As someone charged with enabling users across an enterprise with AI tooling, the majority of whom are not in the software dev category, this article perfectly mirrors my approach. Which is reassuring!

Challenges we are solving with centralised MCP are around brand guardianship, tone of voice, internal jargon and domain context, access to common data sources, and via the resources methods in MCP access to “skills” that prescribe patterns and shims for expected paths and ways of connecting/extracting data.

tcbrah today at 12:58 PM

The maintenance burden is the real MCP killer nobody talks about. Your agent needs GitHub? Now you depend on some npm package wrapping an API that already had good docs. I just shell out to the gh CLI and curl; when the API changes, the agent reads the updated docs and adapts. With MCP you wait on a middleman to update a wrapper.

tptacek nailed it: once agents run bash, MCP is overhead. The security argument is weird too; it shipped without auth and now claims security as its chief benefit. chroot jails and scoped tokens solved this decades ago.

The only place MCP wins is OAuth flows for non-technical users who will never open a terminal. For dev tooling? Just write better CLIs.

MaxLeiter yesterday at 9:37 PM

MCPs are great for some use cases

In v0, people can add e.g. Supabase, Neon, or Stripe to their projects with one click. We then auto-connect and auth to the integration’s remote MCP server on behalf of the user.

v0 can then use the tools the integration provider wants users to have, on behalf of the user, with no additional configuration. Query tables, run migrations, whatever. Zero maintenance burden on the team to manage the tools. And if users want to bring their own remote MCPs, that works via the same code path.

We also use various optimizations like a search_tools tool to avoid overfilling context

rcarmo today at 8:40 AM

This seems misguided when you have to work in enterprise settings. MCP is a very natural fit for all the API auditing and domain borders that exist in enterprise environments, because it provides deterministic tooling and auditable interfaces for agents. Nobody wants an AI agent doing random API calls or shell commands.

agentpiravi today at 1:05 PM

The credential proxy pattern (agent never sees the key, gateway owns it) works well when the human is the principal and the agent is acting on their behalf. But it hits a wall when the agent needs to be the principal.

Email sent from a human's account on behalf of an agent is a different legal and reputational thing than email sent from the agent's own address. If the agent makes a mistake, takes an action, or enters into a relationship — whose name is on it? Right now the answer is almost always "the human's", which means agents can't really be held accountable as entities.

The deeper issue MCP hasn't addressed is that auth was built for users, not agents. OAuth gives agents delegated access. But delegation isn't identity. An agent with delegated Gmail access is acting as a deputy. An agent with its own email address and phone number is acting as a first-class participant.

Some things you want the deputy model (browsing the web, reading your calendar). Some things need a distinct identity — outreach, commitments, anything where attribution matters downstream. Those two cases need different infrastructure.

jamesrom yesterday at 10:44 PM

The problem with MCP isn't MCP. It's the way it's invoked by your agent.

IMO, by default MCP tools should run in a forked context. Only a compacted version of the tool response should be returned to the main context. This costs tokens, yes, but it doesn't blow out your entire context.

If other information is required post-hoc, the full response can be explored on disk.
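A rough sketch of that pattern, with a hypothetical harness-side helper: the full tool response is persisted to disk, and only a compact summary (plus a pointer back to the full payload) returns to the main context.

```python
import json
import os
import tempfile

# Hypothetical "forked context" helper: run the tool, write the full
# response to disk, and hand the main context only a compacted view.
def call_tool_compacted(tool_response: dict, max_items: int = 3) -> dict:
    fd, path = tempfile.mkstemp(suffix=".json")
    with os.fdopen(fd, "w") as f:
        json.dump(tool_response, f)  # full payload, explorable post hoc
    items = tool_response.get("items", [])
    return {
        "total_items": len(items),
        "preview": items[:max_items],
        "full_response_path": path,  # the agent can re-open this later
    }

resp = {"items": [f"row-{i}" for i in range(500)]}
summary = call_tool_compacted(resp)
print(summary["total_items"], summary["preview"])
```

The compaction policy here (count plus a short preview) is invented; the point is that the main context pays for a summary while the forked side pays for the full response once.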

jFriedensreich today at 2:28 PM

Finally! I have been saying this for months, generally to big backlash. The only two aspects missing are the role of central MCP gateways and code mode. We don't know 100% how these will be used optimally, but that's what the future will look like for 90% of use cases. I would go so far as to say that someone will have to make a bash-to-JS compiler for simple cases like piping common commands (cat, ls, rg, grep), because that would allow using all the RL and training data while saving all the overhead of steering away from them. Once there are virtually no local tools left, we can just scale up agent servers like opencode serve to serve agents the way a web server serves pages.

dostick today at 6:18 PM

The author likes to look at every concept from all sides, yet is seemingly unaware of TOON (Token-Oriented Object Notation), almost wishing something like that existed…

antirez yesterday at 9:18 PM

Ask yourself: what kind of tool would I love to have to accomplish the work I'm asking the LLM agent to do? Often, what is practical for humans to use is practical for LLMs too. And the answer is almost never the kind of thing MCP exports.

socketcluster yesterday at 11:16 PM

I find that skills work very well. The main SKILL file has an overview of all the capabilities of my platform at a high level and each section links to a more specific file which contains the full information with all possible parameters for that particular capability.

Then I have a troubleshooting file (also linked from the main SKILL file) which basically lists out all the 'gotchas' that are unique to my platform and thus the LLM may struggle with in complex scenarios.

After a lot of testing, I identified just 5 gotchas and wrote a short section for each one. The title of each section describes the issue and lists out possible causes with a brief explanation of the underlying mechanism and an example solution.

Adding the troubleshooting file was a game changer.

If it runs into a tricky issue, it checks that troubleshooting file. It's highly effective. It made the whole experience seamless and foolproof.

My platform was designed to reduce applications down to HTML tags which stream data to each other so the goal is low token count and no-debugging.

I basically replaced debugging with troubleshooting; the 5 cases I mentioned are literally all that was left. It seems to be able to quickly assemble any app without bugs now.

The 'gotchas' are not exactly bugs but more like "Why doesn't this value update in realtime?" kind of issues. They involve performance/scalability optimizations that the LLM needs to be aware of.

skybrian yesterday at 9:00 PM

If it's a remote API, I suppose the argument is that you might as well fetch the documentation from the remote server, rather than using a skill that might go out of date. You're trusting the API provider anyway.

But it's putting a lot of trust in the remote server not to prompt-inject you, perhaps accidentally. Also, what if the remote docs don't suit local conditions? You could make local edits to a skill if needed.

Better to avoid depending on a remote API when a local tool will do.

gdorsi today at 8:42 AM

One part that makes me wary of these tools is security.

If I use a remote MCP or CLI that relies on network calls, and put it in the hands of my coding assistant, wouldn't it be too easy to inject prompts and exfiltrate data from my machine?

At least MCPs don't have direct access to my machine; CLIs do.

jwilliams yesterday at 9:18 PM

I have moved towards super-specific scripts (so I guess "CLI"?) for a few reasons:

1. You can make the script very specific to the skill and permission it appropriately.

2. You can have the output of the script make clear to the LLM what to do. Lint fails? "Lint rules have failed. This is important for reasons blah blah, and you should do X before proceeding." Otherwise the agent is too focused on smashing out the overall task and might opt to route around the error. Note you can use this for successful cases too.

3. The output and token usage can be tailored to exactly what the agent needs. Saves context. My GitHub comments script really just gives the comments + the necessary metadata, not much else.

The downsides of MCP all focus on (3), but (1) and (2) can be really important too.
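Point 2 can be as simple as a wrapper whose failure output steers the agent instead of just reporting an exit code (the messages and rule names here are invented):

```python
# Hypothetical wrapper-script output: on failure, tell the agent exactly
# what to do next, so it fixes the error rather than routing around it.
def lint_gate(lint_passed: bool, errors: list[str]) -> str:
    if lint_passed:
        return "LINT OK. You may proceed to the next step."
    return (
        "LINT FAILED. These rules protect production; do not bypass "
        "or route around them. Fix each error, then rerun:\n- "
        + "\n- ".join(errors)
    )

print(lint_gate(False, ["unused import os", "line too long (103 > 100)"]))
```

The success branch matters too: an explicit "proceed" message keeps the agent from second-guessing a clean run.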

jollyllama yesterday at 8:49 PM

> Centralization is Key

> (I preface that this is primarily relevant for orgs and enterprises; it really has no relevance for individual vibe-coders)

The thing about tools that "democratize" software development, whether it is Visual Studio/Delphi/QT or LLMs, is that you wind up with people in organizations building internal tools on which business processes will depend who do not understand that centralization is key. They will build these tools in ignorance of the necessity of centralization-centric approaches (APIs, MCP, etc.) and create Byzantine architectures revolving around file transfers, with increasing epicycles to try to overcome the pitfalls of such an approach.

twoodfin today at 12:45 AM

The only value—and it’s significant—that a fixed-tools protocol like MCP can provide is to serve as the capability base for an embedded agent security model.

The agent can only perform the operations it has been expressly given tools to perform, and its invocation of those tools can be audited and otherwise governed.

Whether MCP evolves to fulfill this role effectively, time will tell.

AznHisoka yesterday at 9:21 PM

I am not sure where the OP is hearing that the hype cycle is dissipating, but MCP adoption is actually accelerating, not decreasing [1]

More than 200% growth in official MCP servers in past 6 months: https://bloomberry.com/blog/we-analyzed-1400-mcp-servers-her...

Frannky yesterday at 10:56 PM

I don't know. Skill + HTTP endpoint feels way safer, more powerful, and more robust. The problem is usually that the entity offering the endpoint, if the endpoint is AI-powered, incurs the LLM costs. Via MCP, the coding agent is eating that cost, unless you are also the one running the API and so can use the coding-plan endpoint to do the AI part.

thunkle yesterday at 11:50 PM

So if I release a new CLI, how do I get the LLM to know about it? Do I tell it every time to run the command? Do I build a skill? Should I release a skill with the CLI? Do I just create docs on GitHub and hope the next crawl gets into the training set?

twapi yesterday at 9:27 PM

> Influencer Driven Hype Cycle

CuriouslyC today at 1:00 PM

This article is sort of right. Though MCP itself is still a very meh standard, for secure enterprise use cases SOME agent-specific standard is really valuable. It gives you a single point of management. What matters is that it's _for agents_ and that it has traction.

I wrote a little bit about this a while ago: https://sibylline.dev/articles/2026-03-01-mcp-changed-my-min...

I created an example repo demonstrating this pattern and how it can be used at https://github.com/sibyllinesoft/smith-gateway

ryan14975 today at 12:09 PM

Using MCP daily through Claude Code for browser automation and external APIs. The protocol works — the tooling around it is what needs to mature.

The biggest pain point is reliability: connections drop, tools fail silently, and there is no good way to know if a call actually reached the server.

But the article's "just HTTP with extra steps" framing misses the point. The value is the standardized tool interface. Before MCP, every AI integration was a bespoke wrapper. A shared vocabulary for "here's a tool, here's its schema, call it" is genuinely useful, rough edges and all.

ontouchstart today at 1:06 AM

Today is Pi Day and I bumped into this blog:

https://mariozechner.at/posts/2025-11-30-pi-coding-agent/#to...

Being 4.5 months behind the trend has its advantage. ;-)

Jayakumark yesterday at 10:17 PM

Can you please share source code for the Resources/Prompts example ?

menix yesterday at 9:18 PM

One aspect I think is often overlooked in the CLI vs. MCP debate: MCP's support for structured output and output schema (introduced in the 2025-06-18 spec). This is a genuinely underrated feature that has practical implications far beyond just "schema bloat."

Why? Because when you pair output schema with CodeAct agents (agents that reason and act by writing executable code rather than natural language, like smolagents by Hugging Face), you solve some of the most painful problems in agentic tool use:

1. Context window waste: Without output schema, agents have to call a tool, dump the raw output (often massive JSON blobs) into the context window, inspect it, and only then write code to handle it. That "print-and-inspect" pattern burns tokens and attention on data the agent shouldn't need to explore in the first place.

2. Roundtrip overhead: Writing large payloads back into tools has the same problem in reverse. Structured schemas on both input and output let the agent plan a precise, single-step program instead of fumbling through multiple exploratory turns.
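The two points above can be illustrated with a toy CodeAct-style example (the schema and tool result are invented): because the output shape is promised up front, the agent can emit one precise program against the fields, and only the final scalar needs to re-enter the context window.

```python
# Invented output schema a tool might declare (JSON Schema style). With
# this known in advance, there is no print-and-inspect turn: the agent
# writes one program against the promised shape.
OUTPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "orders": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "id": {"type": "string"},
                    "total": {"type": "number"},
                },
            },
        }
    },
}

def agent_program(tool_result: dict) -> float:
    # Written in a single step because the schema promised this exact
    # shape; only the returned number re-enters the context window.
    return sum(order["total"] for order in tool_result["orders"])

result = {"orders": [{"id": "a1", "total": 9.5}, {"id": "b2", "total": 0.5}]}
print(agent_program(result))
```

Without the schema, the agent would first dump `result` into context to discover that orders have a `total` field, which is exactly the waste described in point 1.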

There's a blog post on Hugging Face that demonstrates this concretely using smolagents: https://huggingface.co/blog/llchahn/ai-agents-output-schema

And the industry is clearly converging on this pattern. Cloudflare built their "Code Mode" around the same idea (https://blog.cloudflare.com/code-mode/), converting MCP tools into a TypeScript API and having the LLM write code against it rather than calling tools directly. Their core finding: LLMs are better at writing code to call MCP than at calling MCP directly. Anthropic followed with "Programmatic tool calling" (https://www.anthropic.com/engineering/code-execution-with-mc..., https://platform.claude.com/docs/en/agents-and-tools/tool-us...), where Claude writes Python code that calls tools inside a code execution container. Tool results from programmatic calls are not added to Claude's context window, only the final code output is. They report up to 98.7% token savings in some workflows.

So the point here is: MCP isn't just valuable for the centralization, auth, and telemetry story the author laid out (which I fully agree with). The protocol itself, specifically its structured schema capabilities, directly enables more efficient and reliable agentic workflows. That's a concrete technical advantage that CLIs simply don't offer, and it's one more reason MCP will stick around.

Long live MCP indeed.

gdorsi today at 8:39 AM

There is another differentiator between CLIs and MCP.

CLIs are executed by the coding assistant in the project directory, which means they can pick up implicit context from there (e.g. the current git branch and commit).

With an MCP you would need a prepare step to gather that, making things slower.

aa_is_op today at 11:27 AM

Can an MCP server be legitimately secured? Asking out of curiosity.

lostdog yesterday at 9:38 PM

In MCP setups you do give the agent the full description of what the tool can do, but I don't see why you couldn't do the same for executables. Something like injecting `tool_exe --agent-usage` into the prompt at startup.

Great article otherwise. I've been wondering why people are so zealous about MCP vs executable tools, and it looks like it's just tradeoffs between implementation differences to me.
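The injection idea above can be sketched as follows, using `--help` as a stand-in for the proposed (hypothetical) `--agent-usage` flag and the Python interpreter as a tool that is guaranteed to exist:

```python
import subprocess
import sys

# Sketch: at startup, run each tool with a usage flag and inject the
# captured output into the system prompt, mirroring how MCP advertises
# tool descriptions. `--agent-usage` is hypothetical; `--help` stands in.
def tool_usage(cmd: list[str]) -> str:
    """Capture a tool's self-description for the system prompt."""
    out = subprocess.run(cmd, capture_output=True, text=True)
    return out.stdout or out.stderr

prompt = "Available tools:\n\n" + tool_usage([sys.executable, "--help"])
print(prompt[:60])
```

A dedicated `--agent-usage` flag could return tighter, agent-oriented text than `--help`, which is the tradeoff the parent comment is pointing at.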

SilverElfin yesterday at 8:54 PM

This came up in recent discussions about the Google apps CLI that was recently released. Google initially included an MCP server but then removed it silently - and some people believe this is because of how many different things the Google Workspace CLI exposes, which would flood the context. And it seemed like in social media, suddenly a lot of people were talking about how MCP is dead.

But fundamentally that doesn’t make sense. If an AI needs to be fed instructions or schemas (context) to understand how to use something via MCP, wouldn’t it need the same things via CLI? How could it not? This article points that out, to be clear. But what I’m calling out is how simple it is to determine for yourself that this isn’t an MCP versus CLI battle. However, most people seem to be falling for this narrative just because it’s the new hot thing to claim (“MCP is dead, Long Live CLI”).

As for Google - they previously said they are going to support MCP. And they’ve rolled out that support even recently (example from a quick search: https://cloud.google.com/blog/products/ai-machine-learning/a...). But now with the Google Workspace CLI and the existence of “Gemini CLI Extensions” (https://geminicli.com/extensions/about/), it seems like they may be trying to diminish MCP and push their own CLI-centric extension strategy. The fact that Gemini CLI Extensions can also reference MCP feels a lot like Microsoft’s Embrace, Extend, Extinguish play.

spiderfarmer today at 7:56 AM

I use Claude Cowork to talk to my (remote) CMS over MCP to continually improve all the content on my website. If I find a new nugget of interesting information, I tell it to improve my content with it. I created lots of tools to help it do things that would require multiple calls in a pure, basic REST API. Plus you can describe lots of guidelines right in the MCP instructions.

I hear everyone talking about skills, but is this something I should use skills for?

charcircuit yesterday at 10:04 PM

>The LLM has no way of knowing which CLI to use and how it should use it…unless each tool is listed with a description somewhere either in AGENTS|CLAUDE.md or a README.md

This is what the skill file is for.

>Centralizing this behind MCP allows each developer to authenticate via OAuth to the MCP server and sensitive API keys and secrets can be controlled behind the server

This doesn't require MCP. Nothing is stopping you from creating a service to proxy requests from a CLI.

The problem with this article is that it doesn't recognize that skills are a more general superset of MCP. Anything done with MCP could have an equivalent done with a skill.
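The proxy point can be sketched without MCP at all (all names and URLs here are invented): the CLI sends an unauthenticated request to an internal gateway, which attaches the real key server-side, so the agent's environment never holds a secret.

```python
# Hypothetical non-MCP credential proxy. The secrets live only on the
# gateway; the CLI (and therefore the agent) never sees them.
SERVER_SIDE_SECRETS = {"github": "ghp_real_token"}  # held by the proxy only

def proxy_request(service: str, path: str) -> dict:
    """What the gateway would forward upstream on the CLI's behalf."""
    token = SERVER_SIDE_SECRETS[service]
    return {
        "url": f"https://api.{service}.example/{path}",
        "headers": {"Authorization": f"Bearer {token}"},
    }

# The CLI side only ever constructs this much, with no secret in sight:
cli_view = {"service": "github", "path": "repos/acme/widgets/issues"}
forwarded = proxy_request(**cli_view)
print(forwarded["url"])
```

OAuth to the gateway, audit logging, and scoping would bolt onto the same shape; none of it requires the MCP protocol specifically.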

rvz yesterday at 9:59 PM

Great article, and what I would expect from someone inspecting the hype and not jumping head first, just because influencers (paid or unpaid) are screaming for engagement just because a large X account posted their opinions.

This is one of the first posts I've seen that cuts through the hype around both MCPs and CLIs with nuanced findings.

There were times where using MCPs didn't make sense (such as connecting one to a database), and suddenly generating CLIs for everything doesn't make sense at all either. It just seems like the use case was a solution in search of a problem, on top of a bad standard.

But no-one could answer "who" was the customer of each of these, which is why the hype was unjustified.

colinator today at 1:41 PM

Yet another problem with MCP: every LLM harness that does support it at all supports it poorly and with bugs.

The MCP spec allows MCP servers to send back images to clients (base64-encoded, some json schema). However:

1) codex truncates MCP responses, so it will never receive images at all. This bug has been in existence forever.

2) Claude Code CLI will not pass those resulting images through its multi-modal visual understanding. Indeed, it will create an entirely false hallucination if asked to describe said images.

3) No LLM harness can deal with you bouncing your local MCP server. All require you to restart the harness. None allow reconnection to the MCP server.

I assure you there are many other similar bugs, whose presence makes me think that the LLM companies really don't like MCP and are quietly deprecating it through bugs.

noodletheworld today at 9:21 AM

This is confused and misguided.

The fundamental proposal here is that despite being bad MCP is the correct choice for Enterprise because:

> Organizations need architectures and processes that start to move beyond cowboy, vibe-coding culture to organizationally aligned agentic engineering practices. And for that, MCP is the right tool for orgs and enterprises.

…but, you can distill this to: the “cowboys” are off MCP because they've moved to yolo openclaw, where anything goes and there are no rules, no restrictions and no auditing.

…but that's a strawman from the twatter hype train.

Enterprises are not adopting openclaw.

It’s not “MCP or Openclaw”.

That's a false dichotomy.

The correct question is: has MCP delivered the actual enterprise value and actual benefits it promised?

Or, were those empty promises?

Does the truly stupid MCP UI proposal actually work in practice?

Or, like the security and auditing, is it a disaster in practice, which was never really thought through carefully by the original authors?

It seems to me that vendors are increasingly determining that controlled AI integrations with RBAC are the correct way forward, but MCP has failed to deliver that.

That's why MCP is dying off.

…because an open plugin ecosystem gives you broken crap like the Atlassian MCP server, and a bunch of dubious 3rd-party hacks.

That's not what enterprises want, for all the reasons in the article.
