The article seems quite editorialized, shifting between describing the baseline as "large-scale AI models" and as "neural network-based approaches".
The underlying paper itself is more precise: it compares against LUAR, a 2021 method based on BERT-style embeddings (i.e. a model with 82M parameters, about 0.2% the size of, e.g., the recent open-weights Gemma models). I don't fault the authors of the paper at all for this; their method is interesting and more interpretable! But check the publication history; their paper was originally uploaded in 2024: https://arxiv.org/abs/2403.08462
A good example of why some folks are bearish on journals.
"AI bad" seems to sell in some circles, and while there are many level-headed criticisms to be made of current AI fads, I don't think this qualifies.
If there's one problem that LLMs have solved, it's language. While an LLM may hallucinate, it does so in grammatically correct English sentences. Additionally, even a locally run Gemma 3 27B can seamlessly switch between languages mid-conversation while maintaining context. That's perhaps the most exciting part for me: we have a bona fide universal translator (that's Star Trek territory), and people seem more focused on its factual accuracy.
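For anyone who hasn't tried it, the mid-conversation language switch is easy to see locally. A minimal sketch with Hugging Face transformers; the model id is just a stand-in, swap in whatever instruction-tuned model you actually run:

    # Sketch: mid-conversation language switching with a local model.
    # "google/gemma-2-9b-it" is an assumption; any chat-tuned model works.
    from transformers import pipeline

    chat = pipeline("text-generation", model="google/gemma-2-9b-it")

    messages = [
        {"role": "user", "content": "Recommend a classic sci-fi novel."},
        {"role": "assistant", "content": "Try 'The Left Hand of Darkness' by Ursula K. Le Guin."},
        # Switch to German mid-thread; the model keeps the context.
        {"role": "user", "content": "Worum geht es in dem Buch?"},
    ]
    out = chat(messages, max_new_tokens=200)
    print(out[0]["generated_text"][-1]["content"])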
I might be misinterpreting, but the LUAR model (which is a transformer) seems to do decently well:
https://www.nature.com/articles/s41599-025-06340-3/figures/2
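For anyone curious how that family of methods works mechanically, embedding-based authorship verification is easy to sketch. This is just the general idea; the encoder below is a generic stand-in, not the actual LUAR checkpoint:

    # General idea behind LUAR-style verification: embed texts, then
    # compare a disputed sample against an author profile by cosine
    # similarity. The encoder is a stand-in, not the real LUAR model.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

    known = [
        "A text known to be written by author A ...",
        "Another sample from author A ...",
    ]
    disputed = "A disputed text whose authorship we want to score ..."

    # Average the known-author embeddings into a single profile vector.
    profile = model.encode(known).mean(axis=0)
    score = util.cos_sim(profile, model.encode(disputed))
    print(f"similarity to author A: {float(score):.3f}")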
I wonder if this approach can be used to determine whether a text was generated by a specific LLM.
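One common heuristic for that is perplexity scoring: run the text through each candidate model and attribute it to the one that finds it least surprising. A rough sketch (the model ids are placeholders, and this is far from a reliable detector):

    # Score a text's perplexity under each candidate model; lower
    # perplexity means the model finds the text more "natural".
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def perplexity(model_id: str, text: str) -> float:
        tok = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id)
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            # Passing labels=ids makes the model return the mean
            # next-token cross-entropy; exp() of that is perplexity.
            loss = model(ids, labels=ids).loss
        return float(torch.exp(loss))

    text = "Some passage of unknown provenance ..."
    for candidate in ["gpt2", "distilgpt2"]:
        print(candidate, perplexity(candidate, text))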
Ha! To think that we're finally back to asking ourselves why we're using generative models for categorization and extraction. I wonder how much money companies have collectively wasted whittling away at square pegs.
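For plain categorization, the right-sized tool is often a small discriminative classifier. A toy example (the data here is made up for illustration):

    # Toy text categorization with TF-IDF + logistic regression.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "refund my order",
        "app crashes on login",
        "love the new update",
        "how do I reset my password",
    ]
    labels = ["billing", "bug", "praise", "support"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["my payment was charged twice"]))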
It should be obvious that LLMs would be able to beat this with ease. I'm not sure why the paper deliberately skipped comparing against current LLMs.
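The baseline wouldn't even be hard to run; zero-shot prompting for authorship verification looks something like this (the OpenAI client and model name are just examples, any chat API would do):

    # Sketch of a zero-shot LLM baseline for authorship verification.
    # Client and model are illustrative, not what the paper used.
    from openai import OpenAI

    client = OpenAI()
    sample_a = "First text sample ..."
    sample_b = "Second text sample ..."

    prompt = (
        "Here are two text samples.\n\n"
        f"Sample A:\n{sample_a}\n\nSample B:\n{sample_b}\n\n"
        "Were they written by the same author? Answer YES or NO, then "
        "briefly justify your answer using stylistic evidence."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)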
Example of LLMs doing well on similar tasks: https://arxiv.org/abs/2602.16800
Using LLMs for everything is going to be seen as a big fad in a few years. First we try them for everything, then we find what use cases actually make sense, then we scale back. Woe betide our 401(k)s when it happens, though.