Hacker News

bigstrat2003 · yesterday at 8:25 PM · 0 replies

We can't even trust LLMs to get basic logic right, or even the language syntax sometimes. They reliably generate code worse than a human would write, and have zero reasoning ability. Anyone who thinks LLMs can model something complicated is either uncritically absorbing hype or has a financial stake in convincing people of the hype.