I so often wonder about new CLI tools whose primary selling point is their speed over other tools. Yet I personally have never hit a case where a tool like jq feels so slow that I'd feel the urge to find something else. What do people do all day that existing tools are no longer enough for? Or is it that kind of "my new terminal opens 107ms faster now, and I don't notice it, but I simply feel better because I know"?
Deal with really big log files, mostly.
If you work at a hyperscaler, service log volume borders on the insane, and while there is a whole pile of tooling around logs, often there's no real substitute for pulling a couple of terabytes locally and going to town on them.
We parse JSON responses for dashboards, alerting, etc. Thousands of nodes, depending on the resolution of your monitoring you could see improvements here.
It's a simple loop:
- Someone likes tool X
- Figures they can vibe-code an alternative
- They pick Rust for performance, or FAVORITE_LANG for credibility
- Claude implements a small subset of the features
- They benchmark that subset
- Claim the win, profit on the showcase
Note: this particular project doesn't have many visible tells, but there's a pattern of overdocumentation (a 17% comment-to-code ratio, a >1000-word README, Claude-like comment patterns), so it might be a guided process.
I still think that the project follows the "subset is faster than set" trend.
You don't know something is slow until you encounter a use case where the speed becomes noticeable; then you see the slowness across the board. If you can notice that a command hasn't completed, and you're able to fully process a thought about it, it's slow (slower than your mind, ergo slow!).
Usually, a perceptive user/technical mind is able to tweak their usage of a tool around its limitations, but if you can find a tool that doesn't have those limitations, it feels far superior.
The only place ripgrep hasn't seeped into my workflow, for example, is after the pipe, and that's just out of (bad?) habit. So much so that sometimes I'll foolishly do rg "<term>" | grep <second filter>, then do a metaphorical facepalm in my mind. Let's see if jg can make me go jg <term> | jq <transformation> :)
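For what it's worth, rg reads from stdin just like grep does, so the second-stage filter can stay in ripgrep with no change to the pipeline shape (the patterns and input below are placeholders):

```shell
# rg filters stdin the same way grep does, so the habit costs nothing to fix:
# keep lines matching "foo", then narrow to those also matching "bar".
printf 'foo bar\nfoo baz\nqux\n' | rg foo | rg bar
```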
Race between ripgrep and ugrep is entertaining.
Optimization = good
Prioritizing SEO-friendly speed claims over supporting the same features/syntax (especially without an immediately prominent disclosure of the deficiencies) = marketing bullshit
A faster jq except it can't do what jq does... maybe I can use this as a pre-filter when necessary.
Speed is a quality in itself. We are so bogged down by slow stuff that we often ignore it and don't actively search for alternatives.
But every now and then a well-optimised tool/page comes along with instant feedback and is a real pleasure to use.
I think some people are more affected by that than others.
Obligatory https://m.xkcd.com/1205
Not every use case of jq is a person using it interactively in their terminal, believe it or not.
For people chewing through 50GB logs or piping JSON through cron jobs all day, a 2x speedup is measurable in wall time and cloud bill, not just terminal-brain nonsense. Most people won't care.
If jq is something you run a few times by hand, a "faster jq" is about as compelling as a faster toaster. A lot of these tools still get traction because speed is an easy pitch, and because some team hit one ugly bottleneck in CI or a data pipeline and decided the old tool was now unacceptable.
I process TB-size ndjson files. I want to use jq to do some simple transformations between stages of the processing pipeline (e.g. rename a field), but it is so slow that I write a single-use node or rust script instead.
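For context, the kind of rename meant here is a jq one-liner (field names below are hypothetical); it's the throughput on TB-scale streams, not the expressiveness, that pushes one toward a custom script:

```shell
# Rename "old_name" to "new_name" in every ndjson record.
# -c emits compact one-record-per-line output, preserving the ndjson shape.
jq -c '.new_name = .old_name | del(.old_name)' input.ndjson > output.ndjson
```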