mft_ · yesterday at 12:34 PM · 0 replies

It's not black and white. There are scales of complexity and innovation, and at the moment LLMs are mostly good (with obvious caveats) at helping with the lower end of the complexity scale, and arguably almost nowhere on the innovation scale.

If, as a principal engineer, you were performing basic work that can easily be replicated by an LLM, then you were being wasted and mistasked.

Firstly, high-end engineers should be working on the hard work underlying advances in operating systems, compilers, databases, etc. Claude currently couldn't write competitive versions of Linux, GCC (as recently demonstrated), BigQuery, or Postgres.

Secondly, and probably more importantly, LLMs are good at doing work in fields already discovered and demonstrated by humans, but there's little evidence of them being able to make intuitive or innovative leaps forward. (You can't just prompt Claude to "create a super-intelligent general AI".) To see the need for advances (in almost any field), and to make the leaps of innovation or understanding needed to achieve those advances, still takes smart (+/- experienced) humans in 2026. And it's humans, not LLMs, that will make LLMs (or whatever comes after) better.

Thought experiment: imagine training a version of Claude, only with all information (history, myriad research, tutorials, YouTube takes and videos, code for v1, v2, etc.) related to LLMs removed from the training data. Then take that version and prompt it to create an LLM. What would happen?