Hacker News

avaer today at 10:40 AM

That was my first thought too -- instead of making it talk like a caveman, you could turn off reasoning, probably with better results.

Additionally, LLMs do not actually operate in text; much of the "thinking" happens in a much higher-dimensional space that just happens to be decoded into text.

So unless the LLM was trained otherwise, making it talk like a caveman does more than change the surface style -- it may actually turn it into a caveman.
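To illustrate the point about the high-dimensional space (a toy sketch with made-up dimensions and random weights, not any real model): internally the model carries a dense hidden-state vector, and text only appears at the very last step, when that vector is projected onto the vocabulary and collapsed to a single token.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: real models use e.g. d_model ~ 4096 and a vocab of ~100k tokens.
d_model, vocab_size = 8, 5
vocab = ["the", "cave", "man", "think", "<eos>"]

# The "thought": a dense hidden-state vector after the final transformer layer.
hidden_state = rng.normal(size=d_model)

# Unembedding: the only place the high-dimensional state touches text.
W_unembed = rng.normal(size=(d_model, vocab_size))
logits = hidden_state @ W_unembed          # one score per vocabulary token

# Greedy decoding: all of that richness collapses to a single token id.
next_token = vocab[int(np.argmax(logits))]
print(next_token)
```

The point of the sketch: constraining the decoded text (caveman-speak) constrains only this last projection step, but via training and feedback through the context it can also shape what the hidden states end up representing.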


Replies

DrewADesign today at 10:51 AM

> much of the thinking happens in a much higher dimensional space that just happens to be decoded as text.

What do you mean by that? It’s literally text prediction, isn’t it?

vova_hn2 today at 11:25 AM

> instead of talk like a caveman you could turn off reasoning, with probably better results

This is not how the feature called "reasoning" works in current models.

"Reasoning" simply lets the model emit, and then consume, some "thinking" tokens before generating the actual output.

All the "fluff" tokens in the visible output have absolutely nothing to do with "reasoning".
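Roughly, the mechanics look like this (a minimal sketch; the `<think>...</think>` delimiter is one convention used by some open reasoning models, and the raw completion here is made up):

```python
import re

def strip_reasoning(raw_completion: str) -> str:
    """Drop the hidden 'thinking' span; only the remainder is shown to the user."""
    visible = re.sub(r"<think>.*?</think>", "", raw_completion, flags=re.DOTALL)
    return visible.strip()

# The model spends tokens "thinking" first, then produces the visible answer.
raw = "<think>User asked for 2+2. Basic arithmetic. Answer: 4.</think>The answer is 4."
print(strip_reasoning(raw))  # The answer is 4.
```

So the extra tokens that make reasoning work are generated *before* the visible answer and stripped out; the style of the visible answer is a separate matter.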

throw83849494 today at 11:19 AM

You obviously do not speak other languages. Other cultures have different constraints and different grammar.

For example, thinking in modern US English generates many extra thoughts just to keep speech in the right cultural context (there is only one correct way to say "People of Color", it changes every year, and any slip makes it horribly wrong).

Some languages are far more expressive and specialized in logical conditions, conditionals, recursion, and reasoning. Like how Eskimos supposedly have 100 words for snow, but for Boolean algebra.

It is well proven that thinking in Chinese needs far fewer tokens!

With this caveman mode you strip out most of the cultural complexities of the anglosphere, making it easier for foreigners and far simpler to digest.
