Seeing the usual LLM hypers angrily replying to this on Twitter is such a tell. Just like the comments on the LLM poisoning articles, some people just can't accept that others don't like LLMs, and they get upset at any hindrance to rapid adoption.
It's also hilarious that they complain about this because, from what I've seen, most LLM hypers will declare something irrelevant or taken over by AI with no understanding of what that thing actually is or involves.
> some people don't like LLMs
It's not even that they "don't like LLMs". They just don't like academic fraud! If references were fabricated with a Markov chain it would be just as bad!
Crazy that this is graytexted. So basically the HN consensus is that we need to hype LLMs and accelerate their adoption everywhere.
Bonkers. At the same time, peak HN.
It's hard for me to even understand their perspective. Researching references for a published academic paper isn't some incidental busywork task; it's supposed to be a core part of doing research, which is the core of the job. If you have no sympathy for someone who, say, paid a person on Fiverr to cook up a paper rather than writing it themselves and then didn't even bother to check the references, why is using an LLM and not checking any better?