Hacker News

lubujackson · today at 3:48 PM

Long term, I think the best value AI gives us is a powerful tool to gain understanding. I think we are going to see deep understanding become the output goal of LLMs soon. For example, the blocker on this project was the dense C code with 400 rules. Working with LLMs allowed the structure to be parsed and understood well enough to create the tool, but maybe an even more useful output would be full documentation of the rules and their interactions.

This could likely be extracted much more easily now from the new code, but imagine API docs or a mapping of the logical ruleset with interwoven commentary: other devtools could be built easily, bug analysis could be done on the structure of the rules independent of the code, optimizations could be determined at an architectural level, etc.

LLMs need humans to know what to build. If generating code becomes easy, codifying a flexible context or understanding becomes the goal that amplifies what can be generated without effort.


Replies

thunfischbrot · today at 5:24 PM

Looks like a clear divide in people's experiences based on how they use these new tools:

1) An all-knowing oracle that is lightly prompted and develops whole applications from requirements specification to deployable artifacts. Superficial use, with little to no review of the code before running and committing.

2) An additional tool next to an already established toolset, used inside or alongside the IDE. Every line gets read and reviewed. The tool needs to defend its choices, and manual rework is common for anything from improving documentation to naming things all the way to architectural changes.

Obviously anything in between is viable as well. 1) seems like a crazy dead end to me if you are looking to build a sustainable service or a fulfilling career.