Hacker News

ozim · today at 2:45 PM

There is still a lot of engineering to be done with LLMs. Maybe not exactly writing code, but I think a lot of optimization problems will be there no matter what.

Some people treat the toilet as a magic hole: they throw stuff in, flush, and think it is fine.

If you throw garbage in, you will have problems at some point.

We are at the stage where people think it is fine to drop everything into an LLM, but then they will see the usage bill and may be surprised that they burned money and the result was not exactly what they expected.


Replies

coffeefirst · today at 3:00 PM

Yep. I hate to predict the future, but I'm betting on small, open models used as tools here and there. Which is great: you can get 90% of the speedup for 5-10% of the cost, once you account for how time-consuming it is to make sense of and fix the output.

The economics and security model of full agents running in loops all day may come home to roost faster than expertise rot.