Hacker News

eggplantiny · today at 12:53 AM

I'm looking at this from a slightly different level of abstraction.

The CLI approach definitely has practical benefits for token reduction. Not stuffing the entire schema into the runtime context is a clear win. But my main interest lies less in "token cost" and more in "how we structure the semantic space."

MCP is fundamentally a tool-level protocol. Existing paradigms like Skills already mitigate context bloat and selection overhead pretty well via tool discovery and progressive disclosure. So framing this purely as "MCP vs CLI" feels like shifting the execution surface rather than making a fundamental architectural change.

The direction I'm exploring is a bit different. Instead of treating tools as the primary unit, what if we normalize the semantic primitives above them (e.g., "search," "read," "create")? Services would then just provide a projection of those semantics. This lets you compress the semantic space itself, expose it lazily, and only pull in the concrete tool/CLI/MCP adapters right at execution time.
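To make that concrete, here's a minimal sketch of what I mean (all names are illustrative, not any real framework's API): primitives like "search" and "read" are first-class, services register projections of them, and the concrete tool/CLI/MCP adapter is only constructed at execution time.

```python
# Hypothetical sketch: normalized semantic primitives as the primary unit,
# services as projections of those primitives, adapters bound lazily.
from typing import Callable, Dict, List, Tuple

# Adapter factories keyed by (service, primitive). Building one is what
# would actually pull in the concrete MCP/CLI/tool machinery.
_adapter_factories: Dict[Tuple[str, str], Callable[[], Callable[..., object]]] = {}

def register(service: str, primitive: str):
    """Register a lazy adapter factory projecting a primitive onto a service."""
    def decorator(factory: Callable[[], Callable[..., object]]):
        _adapter_factories[(service, primitive)] = factory
        return factory
    return decorator

def semantic_space() -> Dict[str, List[str]]:
    """What the agent sees up front: a compressed map of primitives to
    services, with no tool descriptions or schemas loaded yet."""
    space: Dict[str, List[str]] = {}
    for service, primitive in _adapter_factories:
        space.setdefault(primitive, []).append(service)
    return space

def execute(service: str, primitive: str, *args, **kwargs):
    """Resolve and invoke the concrete adapter only at execution time."""
    factory = _adapter_factories[(service, primitive)]
    return factory()(*args, **kwargs)

# Example projections (stubs standing in for real adapters).
@register("github", "search")
def _github_search():
    return lambda query: f"github results for {query!r}"

@register("filesystem", "read")
def _fs_read():
    return lambda path: f"contents of {path}"
```

The point of the sketch is that `semantic_space()` is all the agent needs in context to plan; the schema-heavy adapter only materializes inside `execute()`.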

You can arguably approximate this with Skills, but the current mental model is still heavily anchored to "tool descriptions"—it doesn't treat normalized semantics as first-class citizens. So while the CLI approach is an interesting optimization, I'm still on the fence about whether it's a real structural paradigm shift beyond just saving tokens.

Ultimately, shouldn't the core question be less about "how do we expose fewer tools," and more about "how do we layer and compress the semantic space the agent has to navigate?"


Replies

charcircuit · today at 1:55 AM

>what if we normalize the semantic primitives above them (e.g., "search," "read," "create")?

Trying to dictate the abstractions that should be used is not bitter lesson pilled.

jwpapi · today at 12:57 AM

ports & adapters :)
