
Octoth0rpe · today at 11:29 AM

> An over-engineered solution (complete with CLI, storage backend, documentation, unit tests) for a trivial problem which that person would've solved by an elegant bash one-liner only 3 years ago.

Importantly, I think AI companies are motivated towards the overengineered solutions as they increase the buyer's token spend. I'm not sure how we can create incentives that optimize for finding the 'right' solution, which may be the cheapest one (the bash one-liner). Perhaps a widely recognized benchmark for this class of problems, one that isn't itself overly optimized for?
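(The parent comment doesn't name a specific task, but as a hypothetical instance of the pattern: imagine the "trivial problem" is counting distinct first fields in a log file. The one-liner below is the kind of thing an experienced engineer would reach for, where an over-eager assistant might instead produce a CLI with a storage backend and a test suite.)

```shell
# Hypothetical trivial problem: count distinct first fields in a log.
# The one-liner solution: extract field 1, deduplicate, count.
printf 'a 1\nb 2\na 3\n' | awk '{print $1}' | sort -u | wc -l
# prints 2 (two distinct first fields: a and b)
```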


Replies

maxsilver · today at 12:13 PM

> Importantly, I think AI companies are motivated towards the overengineered solutions as they increase the buyer's token spend.

Yes, that. And also: the more complicated the solution, the less likely anyone reads or reviews it carefully; instead, people will depend on an LLM to 'read' and 'review' it.

Even ignoring token costs, there’s a high incentive for LLMs to generate complex solutions, because those solutions generate demand for further LLM use. (You don’t really want to review that 30,000 line pull request by hand, do you?)

whazor · today at 12:11 PM

I think the model space is too competitive. People will switch if another model is significantly better.
