So I've been working heavily on redesigning the CLI where I currently work. I took the approach of building it from the ground up to be agent-first, primarily because we already had a good sense of what was missing for humans, but the agent implications were entirely unknown. I'm really happy with that decision, and we've ended up with a much better human experience as a result too. I plan to write up our experience at some point, but in the interim, a few comments on the linked principles.
When I worked at Heroku basically all of these principles held too (though usually described slightly differently, or for different reasons). These are just good CLI design principles; there's nothing agent-native about them:

- Build small, sharp commands that don't require interactivity.
- Follow *nix conventions so users can pipe results in and out to build workflows beyond what you initially imagined.
- Provide useful help and examples.
- If there's a reasonable guess about the next thing a person should do, offer it as a suggestion.
- Be consistent in your terminology.
- Be consistent in data formats (e.g., don't expect the short-form name of a resource as input in one place and its integer ID in another).
- For information that establishes the context in which a command executes (e.g., which user, which org), provide both an environment-level config and a per-command option.
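That last principle boils down to a precedence rule: a per-command flag beats the ambient environment config, which beats a default. A minimal sketch in shell (the `mycli`/`MYCLI_ORG`/`resolve_org` names are made up for illustration, not our real tool):

```shell
#!/bin/sh
# Hypothetical precedence: --org flag > MYCLI_ORG env var > built-in default.
resolve_org() {
  flag_org=""
  [ "$1" = "--org" ] && flag_org="$2"
  printf '%s\n' "${flag_org:-${MYCLI_ORG:-default}}"
}

MYCLI_ORG=acme
resolve_org             # ambient config for the session: prints "acme"
resolve_org --org beta  # per-command override wins: prints "beta"
```

The nice property is that an agent (or a script) can pin the context once for a whole session, while a human can still override it for a single command without touching their environment.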
Just lots of generally helpful advice for people. Turns out it's helpful to agents too.
Something that seems like agent-specific conventional wisdom that I'm not fully bought into: JSON as the output format. For all but the most trivial outputs the LLM does not actually seem to want JSON, and will instead jump through various hoops to turn it into something it can parse more easily. We experimented with TOON[1] as a format and immediately confirmed the reduced-token-output claims. However, when benchmarking actual real use cases, TOON performed worse than both JSON and having the LLM just consume the human-readable output. Digging further into that was eye-opening: it revealed that JSON did so well less because the LLM understands JSON and more because of its knowledge of the extensive ecosystem that already exists around JSON as a format. Looking at all the various tool calls, we could see it made heavy use of piping JSON data through `jq`, `cut`, `awk`, `sort`, `wc`, etc. to get the data into the shape it needed. Failing that, it would fall back to writing temporary Python scripts to do the reshaping.
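For a flavour of what those tool calls looked like: the `mycli` command and the data below are made up, but the pipeline tools are the real ones from the logs.

```shell
#!/bin/sh
# Observed pattern (hypothetical command): pull JSON, reshape with jq, e.g.
#   mycli apps list --json | jq -r '.[].name' | sort | head -5
#
# Failing that, it would reach for awk/cut/sort/uniq/wc on the plain
# human-readable output. Simulated here with a fixed payload:
payload='web-1 running
worker-1 crashed
web-2 running'

# e.g. answering "how many apps are in each state?"
printf '%s\n' "$payload" | awk '{print $2}' | sort | uniq -c | sort -rn
```

Almost every one of these pipelines was the model answering a perfectly reasonable question our output didn't answer directly, which is what made the logs so useful.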
Capturing all of those logs to understand the performance differences felt like a form of the usability testing we used to do at Heroku too. I suddenly saw the way someone (something, in this case) was using the tool in ways I didn't entirely expect. Many of those calls were essentially getting answers to perfectly reasonable questions that we should be surfacing in a better way to humans and agents alike. It's like I managed to squash hundreds of usability tests into a couple of days. It was pretty simple to add extra flexibility to the CLI commands and clearer messaging in other places, which drastically reduced the need for the LLM to post-process the data no matter what format it received it in.
So we still support JSON as a data format because it's genuinely useful for a bunch of reasons. But we also have something more LLM-friendly (TOON-like, but not entirely compliant in specific circumstances where we can see the spec is inefficient) to be as frugal with tokens as we can. That's about the only agent-only addition to the CLI in the end. Despite building it agent-first, it's helped us get to a better human product.
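For concreteness, here's roughly where TOON's token savings come from on uniform lists (illustrative data; check the TOON spec for exact syntax): the keys are declared once in a header rather than repeated per record.

```
JSON (keys repeated in every record):
[{"id":1,"name":"web-1","state":"running"},
 {"id":2,"name":"worker-1","state":"crashed"}]

TOON-style (keys declared once, CSV-like rows):
apps[2]{id,name,state}:
  1,web-1,running
  2,worker-1,crashed
```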