I've always figured that constraining an LLM to speak in any way other than its default reduces its intelligence / reasoning capacity, since at least some of its final layers can be used (on a per-token basis) either to reason about what to say or about how to say it, but not both at once.
(And it's for a similar reason, I think, that deliberative models like rewriting your question in their own terms before reasoning about it. They're decreasing the per-token re-parsing overhead of attending to the prompt [by distilling a paraphrase that obviates any need to attend to the literal words of it], so that some of the initial layers that would either be doing "figure out what the user was trying to say" [i.e. "NLP stuff"] or "figure out what the user meant" [i.e. deliberative-reasoning stuff] — but not both — can focus on the latter.)
I haven't done the exact experiment you'd want to do to verify this effect, i.e. "measuring LLM benchmark scores with vs without an added requirement to respond in a certain speaking style."
But I have (accidentally) done an experiment that's kind of a corollary to it: namely, I've noticed that in the context of LLM collaborative fiction writing / role-playing, the harder the LLM has to reason about what it's saying (i.e. the more facts it needs to attend to), the spottier its adherence to any "output style" or "character voicing" instructions will be.
This is fun. I'd like to see the same idea but oriented toward richer tokens instead of simpler tokens. If you want to spend fewer tokens, then spend the 'good' ones. So, instead of saying 'make good' you could say 'improve idiomatically' or something. Depends on one's needs. I try to imagine every single token as an opportunity to bend/expand/limit the geometries I have access to. Language is a beautiful modulator to apply to reality, so I'll wager applying it with pedantic finesse will bring finer outputs than the brutish humphs of cavemen. But let's see the benchmarks!
Either this already exists, or someone is going to implement it (should I implement it?). Assumptions:
- LLMs can input/output in any useful language,
- human languages are not exactly the optimal way to talk with an LLM,
- internally, LLMs keep knowledge as a whole bunch of weighted connections across multiple layers,
- they need to decode human-language input into tokens, then into something that is easy to digest for the further layers, then produce output and translate it back into tokens and human language (or a programming language, same thing),
- this whole human language <-> tokens <-> input <-> LLM <-> output <-> tokens <-> language round trip is quite expensive.
What if we started to talk to LLMs in non-human-readable languages (programming languages are still human-readable)? Have a tiny model run locally that translates human input, code, files, etc. into some LLM-understandable language; the big LLM gets this as input, skips a bunch of layers on input/output, and returns this non-human-readable language, which the local LLM translates back into human language / code changes.
Yesterday or two days ago there was a post about using Apple's Foundation Models, which have a really tiny context window. But I think they could be used as this translation layer (human -> LLM, LLM -> human) to talk with big models. Though initially the two LLMs would need to discover what "language" they want to talk in; feels doable with reinforcement learning. So: a cheap local LLM to talk to the big remote LLM.
Either this is done already, or it's a super fun project to do.
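Not aware of an existing implementation, but the plumbing is easy to sketch. Here's a minimal outline of the idea in Python; `call()` and the model names are stand-ins for whatever local and remote endpoints you'd actually wire up, and the real open problem is letting the two models negotiate the intermediate "language" instead of hard-coding terse English:

```python
# Hypothetical two-model pipeline: a cheap local model compresses human
# input into a terse intermediate form, the big remote model reasons in
# that form, and the local model expands the reply back into prose.
def call(model: str, prompt: str) -> str:
    raise NotImplementedError  # plug in your local/remote client here

def ask(user_input: str) -> str:
    # 1. Local compression: strip filler, keep entities and intent.
    terse = call("small-local-model",
                 f"Compress to minimal tokens, keep every fact:\n{user_input}")
    # 2. Remote reasoning in the terse dialect.
    terse_answer = call("big-remote-model",
                        f"Answer as tersely as possible:\n{terse}")
    # 3. Local expansion back to human-readable text.
    return call("small-local-model",
                f"Rewrite as clear prose, add nothing new:\n{terse_answer}")
```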
Idk I try talk like cavemen to claude. Claude seems answer less good. We have more misunderstandings. Feel like sometimes need more words in total to explain previous instructions. Also less context is more damage if typo. Who agrees? Could be just feeling I have. I often ad fluff. Feels like better result from LLM. Me think LLM also get less thinking and less info from own previous replies if talk like caveman.
Grug brained developer meets AI tooling (https://grugbrain.dev)
This is neat but my employer rates my performance based on token consumption; is there one that makes Claude needlessly verbose?
Cute idea, but you're never gonna blow your token budget on output. Input tokens are the bottleneck, because the agent's ingesting swathes of skills, directory trees, code files, tool outputs, etc. The output is generally a few hundred lines of code and a bit of natural language explanation.
I no like.
It sort of reminds me of when Palm Pilots (circa the late '90s / early 2000s) used shorthand gestures for writing characters with a stylus. For a short while, people's handwriting on whiteboards looked really bizarre. Except now we're talking about using weird language to conserve AI tokens.
Maybe it's better to accept a higher token burn-rate until things get better? I'd rather not get used to AI jive-talk to get stuff done.
Also see https://arxiv.org/pdf/2604.00025 ('Brevity Constraints Reverse Performance Hierarchies in Language Models' March 2026)
But will it lose some context, like Kevin’s small talk? (https://www.youtube.com/watch?v=_K-L9uhsBLM)
Like "Sea world" or "see the world".
Okay, I like how it reduces token usage, but it kind of feels like it will reduce the overall model intelligence. LLMs are probabilistic models, and you are basically playing with their priors.
Kinda ironic this description is so verbose.
> Use when user says "caveman mode", "talk like caveman", "use caveman", "less tokens", "be brief", or invokes /caveman
For the first part of this: couldn’t this just be a UserPromptSubmit hook with a regex against these? (Rough sketch below.)
See additionalContext in the json output of a script: https://code.claude.com/docs/en/hooks#structured-json-output
For the second, /caveman will always invoke the skill /caveman: https://code.claude.com/docs/en/skills
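Something like this, roughly (untested; the field names follow the structured-JSON hook docs linked above, so double-check them there):

```python
#!/usr/bin/env python3
# Rough sketch of a UserPromptSubmit hook: match the trigger phrases
# with a regex and inject the style instruction as additionalContext,
# skipping the skill lookup entirely.
import json
import re
import sys

TRIGGERS = re.compile(
    r"caveman mode|talk like caveman|use caveman|less tokens|be brief", re.I)

data = json.load(sys.stdin)  # Claude Code passes hook input as JSON on stdin
if TRIGGERS.search(data.get("prompt", "")):
    print(json.dumps({
        "hookSpecificOutput": {
            "hookEventName": "UserPromptSubmit",
            "additionalContext": "Respond in terse caveman style: drop articles, pleasantries, and filler.",
        }
    }))
```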
Soma (aka TikTok) and Big Brother (aka Meta) already happened without government coercion; it only makes sense that we optimize ourselves for Newspeak.
Thank God there are still neverending wars; otherwise authoritarian governments would have no fun left.
There’s a lot of debate about whether this reduces model accuracy, but this is basically Chinese grammar, and Chinese vibe coding seems to work fine while (supposedly) using 30-40% fewer tokens.
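The 30-40% figure is easy to spot-check, though it's entirely tokenizer-dependent. A quick sketch with tiktoken (an OpenAI encoding, so only a rough proxy for Claude's tokenizer; the sentences are arbitrary examples):

```python
# Compare token counts for an English instruction and a Chinese
# translation under one tokenizer. Ratios vary a lot by tokenizer
# and by text, so treat any single pair as anecdote, not data.
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
en = "Please refactor this function to remove the duplicated error handling."
zh = "请重构此函数以消除重复的错误处理。"
print("en:", len(enc.encode(en)), "tokens")
print("zh:", len(enc.encode(zh)), "tokens")
```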
This is an experiment that, although not taken to this extreme, was tested by OpenAI. Their Responses API allows you to control verbosity:
https://developers.openai.com/api/reference/resources/respon...
I don't know their internal evals, but I think I've heard it neither hurts nor improves performance. At the least, this parameter may affect how many comments end up in the code.
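For reference, the knob looks roughly like this in the Python SDK, if I'm reading the linked reference right ("low" / "medium" / "high" being the documented values):

```python
# Minimal sketch of the Responses API verbosity parameter.
from openai import OpenAI

client = OpenAI()
resp = client.responses.create(
    model="gpt-5",
    input="Explain what a borrow checker does.",
    text={"verbosity": "low"},  # "low" | "medium" | "high"
)
print(resp.output_text)
```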
Oh boy. Someone didn't get the memo that for LLMs, tokens are units of thinking. I.e. whatever feat of computation needs to happen to produce the results you seek, it needs to fit in the tokens the LLM produces. Being a finite system, there's only so much computation the LLM's internal structure can do per token, so the more you force the model to be concise, the more difficult the task becomes for it; in the worst case, you're guaranteed not to get a good answer, because it requires more computation than is possible within the tokens produced.
I.e. by demanding the model to be concise, you're literally making it dumber.
(Separating out "chain of thought" into "thinking mode" and removing user control over it definitely helped with this problem.)
I would like to see a (joke) skill that makes Claude talk in only toki pona. My guess is that it would explode the token count though.
This is the best thing since I asked Claude to address me in third person as "Your Eminence".
But combining this with caveman? Gold!
I disagree with this method and would discourage others from using it too, especially if accuracy, faster responses, and saving money are your priorities.
This only makes sense if you assume that you are the consumer of the response. When compacting, harnesses typically save a copy of the text exchange but strip out the tool calls in between. Because the agent relies on this text history to understand its own past actions, a log full of caveman-style responses leaves it with zero context about the changes it made, and the decisions behind them.
To recover that lost context, the agent will have to execute unnecessary research loops just to resume its task.
That's a great idea, but has anyone benchmarked the performance difference?
If this really works there would seem to be a lot of alpha in running the expensive model in something like caveman mode, and then "decompressing" into normal mode with a cheap model.
I don't think it would be fundamentally very surprising if something like this works; it seems like the natural extension of tokenisation. It also seems like the natural path towards "neuralese", where tokens no longer need to correspond to units of human language.
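Whether there's actually alpha in it comes down to arithmetic. A back-of-envelope sketch; every number here is a made-up placeholder, including the compression ratio:

```python
# Does caveman-then-decompress save money? All prices/ratios are
# placeholders; substitute real per-token rates and measured lengths.
big_out   = 15.00 / 1e6   # $/output token, expensive model
small_in  = 0.10  / 1e6   # $/input token, cheap model
small_out = 0.40  / 1e6   # $/output token, cheap model

normal_tokens  = 1000     # length of the verbose answer
caveman_tokens = 250      # assume ~75% shorter in caveman mode

baseline = normal_tokens * big_out
pipeline = (caveman_tokens * big_out      # terse pass on the big model
            + caveman_tokens * small_in   # cheap model reads the terse answer
            + normal_tokens * small_out)  # cheap model expands it
print(f"baseline ${baseline:.4f} vs pipeline ${pipeline:.4f} per response")
```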
It speaks like Kevin from The Office (US) https://youtube.com/shorts/sjpHiFKy1g8?is=M0H4G2o0d6Z-pBAC
Wouldn't this affect quality of output negatively?
Thanks to chain of thought, actually having the LLM be explicit in its output lets it produce higher-quality answers.
I wonder if this will actually be why the models move to "neuralese" or whatever non-language latent representation people work out. Interpretability disappears but efficiency potentially goes way up. Even without a performance increase that would be pretty huge.
So you're telling me I've been prompting LLMs the right way all along
Why does the skill have three nearly identical SKILL.md files? Just curious
I was wondering just yesterday if a model of “why waste time say lot word when few word do trick” would be easier on the tokens. I’ll have to give this a try lol
I think this could be very useful not when we talk to the agent, but when the agents talk back to us. Usually they generate so much text that it becomes impossible to follow. If we received short, focused messages, the interaction would be much more efficient. This should be true for all conversational agents, not only coding agents.
There's a linguistic term for this kind of speech: isolating languages, which don't inflect words and rely on high context and the bare minimum of words to get the meaning across. Chinese is such a language, btw. Don't know what the Chinese think about their language being regarded as caveman language...
So, if this does help reduce the cost of tokens, why not go even further and shorten the syntax with specific keywords, symbols and patterns, to reduce the noise and only keep information, almost like...a programming language?
me like that
the really interesting question is whether it then does its language-based reasoning in short form too, and if so, whether quality is impacted.
everyone who thinks this is a costly or bad idea is looking past a very salient point: code doesn't need much language. sure, other things might need lots of language, but code does not. code is already basically language, just a really weird one. we call them programming languages. they're not human languages. they're languages of the machine. condensing the human-language-to-machine-language interface: good.
if goal make code, few word better. if goal make insight, more word better. depend on task. machine linear, mind not. consider LLM "thinking" is just edge-weights. if can set edge-weights into same setting with fewer tokens, you are winning.
Feels like there should be a way to compile skills, READMEs, and even code files into concise maps and descriptions optimized for LLMs, recompiling only when timestamps change.
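A minimal sketch of that timestamp gate; `summarize()` is a placeholder for whatever condensing model call you'd use:

```python
# Keep an LLM-condensed ".map" next to each source file and only
# regenerate it when the source is newer than the cached summary.
from pathlib import Path

def summarize(text: str) -> str:
    raise NotImplementedError  # call the condensing model here

def compiled(src: Path) -> str:
    out = src.with_suffix(src.suffix + ".map")
    if not out.exists() or src.stat().st_mtime > out.stat().st_mtime:
        out.write_text(summarize(src.read_text()))  # recompile
    return out.read_text()                          # serve from cache
```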
Great idea! If the person who made it is reading: is this based on the board game "Poetry for Cavemen"? (Explain things using only single-syllable words; it even comes with an inflatable log of wood for hitting each other!)
This trick reminds me of "OpenAI charges by the minute, so speed up your audio"
this grug not smart enough to make robot into grugbot. grug just say "Speak to grug with an undercurrent of resentment" and all sicko fancy go way.
APL for talking to LLMs, when? Also, this reminded me of that episode of The Office where Kevin started talking like a caveman to make communication more efficient.
You can also make huge spelling mistakes and use incomplete words with llms they just sem to know better than any spl chk wht you mean. I use such speak to cut my time spent typing to them.
We need a high quality compression function for human readers... because AIs can make code and text faster than we can read.
Better: use classical Chinese.
More like Pidgin English than caveman, perhaps, although caveman does make for a better name.
I mean, I assume you run into the same problem as Kevin in The Office; that sort of faux-simple speech is actually very ambiguous.
(Though, I wonder has anyone tried Newspeak.)
So it's a prompt to turn Jarvis into Hulk!
No articles, no pleasantries, and no hedging. He has combined the best of Slavic and Germanic culture into one :)
By the way why don't these LLM interfaces come with a pause button?
Does this actually result in less compute, or is it adding an additional “translate into caveman” step to the normal output?
Anyone else worried about the long-term consequences for the user's cognitive system of talking like this all day?
Author here. A few people are arguing against a stronger claim than the repo is meant to make. Also, this was very much intended to be a joke, not research-level commentary.
This skill is not intended to reduce hidden reasoning / thinking tokens. Anthropic’s own docs suggest more thinking budget can improve performance, so I would not claim otherwise.
What it targets is the visible completion: less preamble, less filler, less polished-but-nonessential text. And since it's the post-completion output that gets "cavemanned", the code itself isn't affected by the skill at all :)
Also surprised to hear so little faith in RL. I'm quite sure the models from Anthropic have been so heavily tuned to be coding agents that you cannot "force" a model to degrade immensely.
The fair criticism is that my “~75%” README number is from preliminary testing, not a rigorous benchmark. That should be phrased more carefully, and I’m working on a proper eval now.
Also yes, skills are not free: Anthropic notes they consume context when loaded, even if only skill metadata is preloaded initially.
So the real eval is end-to-end (skeleton below):
- total input tokens
- total output tokens
- latency
- quality/task success
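A skeleton of what I have in mind; `run_task()` is a placeholder for driving the agent, with token counts taken from the API's usage metadata:

```python
# End-to-end A/B harness: same tasks with and without the skill,
# tracking total tokens, wall-clock latency, and task success.
import time

def run_task(task: str, caveman: bool) -> dict:
    raise NotImplementedError  # drive the agent; return usage + pass/fail

def compare(tasks: list[str]) -> None:
    for mode in (False, True):
        tokens_in = tokens_out = passed = 0
        start = time.time()
        for task in tasks:
            r = run_task(task, caveman=mode)
            tokens_in += r["input_tokens"]
            tokens_out += r["output_tokens"]
            passed += int(r["passed"])
        print(f"caveman={mode}: in={tokens_in} out={tokens_out} "
              f"passed={passed}/{len(tasks)} wall={time.time() - start:.1f}s")
```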
There is actual research suggesting concise prompting can reduce response length substantially without always wrecking quality, though it is task-dependent and can hurt in some domains. (https://arxiv.org/html/2401.05618v3)
So my current position is: interesting idea, narrower claim than some people think, needs benchmarks, and the README should be more precise until those exist.