
jongjong · yesterday at 11:30 PM

This is well put. If the LLM gets the type wrong, we're already in a failure scenario: a feedback loop of back-and-forth changes.
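
A hypothetical sketch of that loop in TypeScript (the `User` shape and `findUser` helper are made up for illustration): the model assumes a lookup always returns a value, the compiler disagrees, and the "fix" on the next round is another type tweak rather than a decision about what the code should actually do when the value is missing.

```typescript
interface User {
  id: string;
  name: string;
}

// Pretend data store, used only for this sketch.
const users = new Map<string, User>([["1", { id: "1", name: "Ada" }]]);

function findUser(id: string): User | undefined {
  return users.get(id);
}

// First attempt an LLM might produce -- assumes the user always exists:
//   const name: string = findUser("2").name;   // error: object is possibly 'undefined'
//
// The "type fix" on the next round satisfies the compiler but dodges the
// real question (what should happen when the user doesn't exist?):
const name: string = findUser("2")?.name ?? "unknown";

console.log(name); // "unknown"
```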

LLMs are not particularly good at that kind of iterative correction. The idea that LLMs benefit from TypeScript is a case of people anthropomorphizing AI.

The kinds of mistakes AI makes are very different from ours. It's WAY better than humans at copying stuff verbatim and nailing the 'form' of the logic. What it struggles with is 'substance': it doesn't have a complete worldview, so it doesn't fully understand what we mean or what we want.

LLMs struggle more with requirements engineering and architecture, since architecture ties into anticipating how requirements will change.