Hacker News

noodletheworld · last Sunday at 2:44 AM · 2 replies

> What works much better is to tell the model to take a step back and re-evaluate.

I desperately hate that modern tooling relies on “did you perform the correct prayer to the Omnissiah”

> to add some entropy to get it away from the local optimum

Is that what it does? I don't think that's what it does, technically.

I think that's just anthropomorphizing a system that behaves in a non-deterministic way.
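To the commenter's point: the only literal "entropy knob" in sampling is temperature, and a prompt doesn't turn it. A toy sketch of what temperature actually does to a next-token distribution (made-up logits, no real model API involved):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: dividing logits by T > 1 flattens
    # the distribution; T < 1 sharpens it toward the argmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    # Shannon entropy in bits: higher means more randomness per token.
    return -sum(p * math.log2(p) for p in probs if p > 0)

logits = [2.0, 1.0, 0.5, -1.0]  # made-up next-token logits
low = entropy(softmax(logits, temperature=0.5))
high = entropy(softmax(logits, temperature=2.0))
# Raising the temperature raises the entropy; nothing in the prompt
# text touches this parameter.
```

So "adding entropy" is a real, measurable thing at the sampler level, but asking the model to "take a step back" just adds tokens to the context, not randomness to the sampler.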

A more meaningful solution is almost always "do it multiple times".

That's a solution that sometimes makes sense because the system is probability-based, but even then, when you're hitting an opaque API with multiple hidden caching layers, /shrug, who knows.
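The "do it multiple times" approach is easy to sketch: draw n independent samples and keep the best under whatever scoring you trust. `generate` and `score` here are hypothetical placeholders, not any real SDK call:

```python
import random

def best_of_n(generate, score, n=5, seed=None):
    # Brute-force answer to a probabilistic system: sample n candidates
    # and keep the highest-scoring one. No theory about why any single
    # run went wrong is required.
    rng = random.Random(seed)
    candidates = [generate(rng) for _ in range(n)]
    return max(candidates, key=score)

# Toy usage: the "model" is just a noisy number generator and the
# score is the value itself.
best = best_of_n(lambda rng: rng.random(), lambda x: x, n=10, seed=42)
```

Of course, this only works to the extent you can actually score the outputs; with hidden caching layers between you and the model, even the "independent samples" assumption is shaky.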

This is why I firmly believe prompt engineering and prompt hacking are just fluff.

It's both mostly technically meaningless (observing random variance over a sample so small you can't see actual patterns) and obsolete once models/APIs change.

Just ask Claude to rewrite your request "as a prompt for Claude Code" and use that.

I bet it won't be any worse than the prompt you'd write by hand.


Replies

tclancy · last Sunday at 2:56 AM

Other than AI (and possibly npm packaging), where do you feel you have to rely on prayer? Additionally, most of human history has been the story of scientific advancement moving the point at which people rely on prayer, so maybe "suck it up, buttercup" is the best advice here?

nprateem · last Sunday at 3:30 AM

It definitely overcompensates to the point of defensiveness. They have all done so for years.

"Why did you do that?" (Me, just wanting to understand)

"You're right, I should have done the opposite" (starts implementing the opposite without seeking approval, etc.)

But if you agree with it, it won't do that, so it isn't simply a case of randomly rerunning prompts.