People complain a lot about LLM-written articles, but the human comments here on HN are far worse. Mostly a bunch of people extremely proud of themselves for not reading an LLM-written article, then a bunch of people who take it at face value and make the model seem almost useful, and one comment that actually looked at other benchmarks. Good ol' humanity: good at being emotional... not so good at doing analysis.
The article makes some good points about model design (how different-size models within a family can get similar results, how to filter out hallucination, math result reinforcement), so that's worth understanding. It's analyzing a paper that only discussed three sizes of the same model family. But what the article doesn't say is, compared to other model families, Granite 4.1 8B sucks. The only benchmarks it does well at compared to other models are non-hallucination and instruction following. Qwen 3.5 4B (among other models) easily outclasses it on every other metric.
This article teaches a valuable lesson about reading articles in general. You can take useful information away from them (yes, even from ones written by an LLM). But you should also use critical thinking and proactively check whether the article missed anything you might find relevant.
>> The only benchmarks it does well at compared to other models are non-hallucination and instruction following.
I think instruction following is going to be the most useful thing these models do. Add a voice interface and access to a bunch of simple, straightforward devices or APIs and you have a mildly useful assistant. If that can be done with 8B parameters, it will soon run on edge devices. That's solid usefulness.
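Rough sketch of the kind of glue I mean (the endpoint, query_local_model, and the device URLs are placeholders I made up, not any real API): the model only has to follow the instruction to emit structured JSON, and everything around it is ordinary plumbing.

    import json
    import requests

    def query_local_model(prompt: str) -> str:
        # Stand-in for whatever local runtime serves the 8B model;
        # the endpoint and response shape are invented for illustration.
        resp = requests.post("http://localhost:8080/generate", json={"prompt": prompt})
        return resp.json()["text"]

    # Hypothetical device endpoints the assistant is allowed to call.
    DEVICES = {
        "living_room_light": "http://192.168.1.20/toggle",
        "thermostat": "http://192.168.1.21/set",
    }

    def handle_voice_command(transcript: str) -> None:
        # Ask the model to turn the transcribed request into a structured action,
        # then forward that action to the matching device API.
        prompt = (
            'Map the request to JSON {"device": ..., "action": ...}. '
            f"Known devices: {sorted(DEVICES)}. Request: {transcript}"
        )
        action = json.loads(query_local_model(prompt))
        requests.post(DEVICES[action["device"]], json={"action": action["action"]})

    handle_voice_command("turn off the living room light")

If an 8B model can follow that instruction reliably, the rest is just wiring, which is why the instruction-following number matters more to me than the other benchmarks.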
The problem is the signal/noise ratio in these articles. If the AI has written the article, then this same info could have been generated by my own AI, but tailored to my needs. So what, exactly, is the new info that this article is generating that I can use to consult with my AI? That's what I want to get out of this interaction.
Maybe my point is something along the lines of "Just send me the prompt"[0]
> people complain a lot about LLM-written articles, but the human comments here on HN are far worse.
No, they aren't.
You are comparing writing produced with little to no effort to writing produced with the minimal effort required to communicate.
It's reasonable for people to complain that they are being presented with material that not even the author thought was worth the effort.
"The article makes some good points about model design"
But how can I tell if those are good points or not?
I don't want to invest time in reading something if the presence of those "good points" depends on a roll of the dice.
> the human comments here on HN are far worse
I already assume some comments here are LLM written.
> But what the article doesn't say is, compared to other model families, Granite 4.1 8B sucks.
Right. This just says that Granite 4.1 8B is better than a previous version, Granite 4.0-H-Small, which has 32B total parameters, 9B active.
So, they made a less bad model than before. But that doesn't tell you anything about how it compares with other models.
>Mostly a bunch of people extremely proud of themselves for not reading an LLM-written article
I'm not sure it's pride so much as people voicing displeasure with the uncertainty about what went into the LLM prompt. This may have been a one-sentence prompt, or it may have been well-researched background material that the LLM simply reformatted. Why waste minutes or hours verifying it if someone might have spent 10 seconds on it? It's very easy to see their point.
Lately people seem to treat anyone they disagree with voicing an opinion as some kind of auto-fellatio; I wonder what causes them to think this way.
The thing is, it's just a bunch of other original content that has been chewed up and regurgitated into something "new". Just show us the original content instead. This is, by definition, slop. https://huggingface.co/blog/ibm-granite/granite-4-1
The pro-LLM rant is weird. LLMs "hallucinate" by creating detailed, elaborate lies, and the frontier models still do this egregiously. An LLM-written article has, by default, zero value, since every single line could be true or could be a convincingly crafted lie; every line has to be fact-checked.
I'm using Gemini 3.1 Pro to help me research my thesis. Even with search enabled and in pro mode, it still invents entire papers that don't exist, and lies about the contents of existing papers to relate them to the context or to appease me. If I submitted an LLM-written article based on the results it's given me, 80% of the article would be lies.
Commenting to point out that the article is LLM-written is helpful too, since some people aren't able to tell on their own.