
keeda · yesterday at 7:28 PM · 0 replies

In my experience, it's the old time-invested vs. time-saved trade-off. If you're not looking at these reams of output often enough, the incentive to figure out all the flags and config options for verbosity and write these scripts is low: https://xkcd.com/1205/

And because these issues are often sporadic, doing all this setup feels like an unwanted side quest, so humans grit their teeth and wade through the garbage manually each time.

With LLMs, the cost is effectively zero compared to a human's, so the trade-off disappears: have them write the script. In fact, because such scripts benefit the LLM itself by reducing context pollution, which improves accuracy, these measures should be actively identified and put in place.
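To make this concrete, here's a minimal sketch of the kind of throwaway filter an LLM can produce in seconds. The keywords and structure are hypothetical placeholders, not tied to any specific tool; the idea is just to keep only "error-ish" lines and summarize what was suppressed:

```python
#!/usr/bin/env python3
# Sketch of a disposable log filter: keep lines matching a few error-ish
# keywords, drop the rest, and report how much was suppressed.
# The KEEP pattern below is a placeholder, not from any real tool's output.
import re
import sys

KEEP = re.compile(r"\b(error|fail(ed|ure)?|traceback|panic)\b", re.IGNORECASE)

def filter_lines(lines):
    """Yield only matching lines, followed by a one-line summary of drops."""
    dropped = 0
    for line in lines:
        if KEEP.search(line):
            yield line
        else:
            dropped += 1
    yield f"--- {dropped} non-matching lines suppressed ---\n"

if __name__ == "__main__":
    sys.stdout.writelines(filter_lines(sys.stdin))
```

Piped in front of a verbose build or test run (e.g. `some_verbose_tool 2>&1 | ./filter_logs.py`, names hypothetical), this keeps the context small for a human and an LLM alike.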