
royal__ · today at 1:47 AM

I agree, but I think it's for a different reason than the one the author gives: LLMs are a far leakier abstraction than other abstraction layers, meaning it's much harder to convey the true intent of the logic you're trying to encode through natural language. Often, doing so just means relying on the LLM to "get it right", which is inherently messy business. Sometimes that leakiness doesn't matter much. Other times, it does.