
aforwardslash · today at 1:14 AM

> It is NOT the way to work with humans basically because most software engineers I worked with in my career were incredibly smart and were damn good at identifying edge cases and weird scenarios even when they were not told and the domain wasn't theirs to begin with.

I have no clue what AI you're using, but with both Claude and Codex you just explain the outcome, and they are pretty smart at figuring things out on complex codebases. You don't even need a paragraph; just say "doing this I got an error".

> NO guarantee either because these models are NOT deterministic in their output. Same prompt different output each time.

So, exactly like humans. But a bit more predictable and way more reliable.
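To be fair on the determinism point: the variation comes from the sampling step, not from the model weights. A toy sketch (the three-token vocabulary and logit values here are made up for illustration; real models produce logits over a huge vocabulary, but the decoding step works the same way) shows that sampled output varies per call while greedy decoding, or a fixed seed, is fully repeatable:

```python
import math
import random

# Hypothetical toy next-token logits (invented for this example).
logits = {"fix": 2.0, "patch": 1.6, "refactor": 0.5}

def sample_token(logits, temperature=1.0, seed=None):
    """Sample one token from a softmax over the logits.

    With temperature > 0 and no fixed seed, repeated calls on the
    same input can return different tokens -- the 'same prompt,
    different output' effect.
    """
    rng = random.Random(seed)
    scaled = {t: l / temperature for t, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {t: math.exp(v) / z for t, v in scaled.items()}
    r = rng.random()
    acc = 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok
    return tok  # fallback for floating-point edge cases

# Greedy decoding (argmax) is deterministic: same input, same token.
greedy = max(logits, key=logits.get)

# Fixing the seed also makes sampling repeatable.
repeatable = sample_token(logits, seed=42) == sample_token(logits, seed=42)
```

So "Regenerate" is just re-rolling the sampler; run at temperature 0 (or with a pinned seed, where the API exposes one) and the same prompt gives the same answer.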

> That's why every chat box has that "Regenerate" button.

If you're using the chat box to write code, that's a human error, not an LLM one. Don't blame "AI" for your ignorance.

> no matter how smart and expensive the model is, the underlying working principles are the same as GPT-2.

Sure. Every machine is a smoke machine if you operate it wrong enough. This tells me you should not get your insights from random YT videos. As a nugget: some of the underlying working principles of the chat system also powered search engines; and their engineers also drank water, like Hitler.