> Figuring out how to trust AI-written code faster is the project of software engineering for the next few years, IMO
Replace "AI-written" with "cheap-dev-written" and think about why that isn't already true.
The bottleneck is a competent dev understanding a project. Always has been.
Another fundamental flaw is that you can't trust LLMs. It's fundamentally impossible to trust them the way you trust a human. Humans make mistakes; LLMs do not. Anything "wrong" they do is them working exactly as designed.
>Humans make mistakes. LLMs do not. Anything “wrong” they do is them working exactly as designed.
This requires a redefinition of the term mistake, no?