Hacker News

hansmayer · yesterday at 9:47 PM

> Falcons certainly aren't deterministic.

Well, falcons are not deterministic, and in the art of falconry they are trained to do something, yes. Still, I fail to see the analogy here: the falcon is trained to execute a few specific tasks triggered by specific commands, much like a dog. The human more or less needs to remember those few commands. We don't teach dogs and falcons to do everything, do we? Although we do train specific dogs to do specific tasks in various domains, no one ever claimed Fido was superintelligent and that we needed to figure him out better.

> That's what makes them hard to use! A programming language has like ~30 keywords and does what you tell it to do. An LLM accepts input in 100+ human languages and, as you've already pointed out many times, responds in non-deterministic ways. That makes figuring out how to use them effectively really difficult.

Well, yes and no. The problem with figuring out how to use LLMs effectively is caused precisely by their inherent unpredictability, which is a feature of their architecture, further exacerbated by whatever datasets they were trained on. Since we have no f*ing clue what the glorified slot machines might pop out next, and it is not even certain, as recently measured, that they make us more productive, the logical question is: why should we, as you propose in your latest blog, bend our minds to try and "figure them out"? If they are unpredictable, that effectively means we do not control them, so what good is our effort in "figuring them out"? How can you figure out a slot machine? And why the hell should we use it for anything other than a shittier replacement for pre-2019 Google? In this state they are neither augmentation nor amplification. They are a drag on productivity, and it shows (hint: the AWS December outage). How is that amplifying anything other than toil and work for the humans?


Replies

simonw · yesterday at 10:20 PM

I've found that using LLMs has had a very material effect on my productivity as a software developer. I write about them to help other people understand how I'm getting such great results and that this is a learnable skill that they can pick up.

I know about the METR paper that says people over-estimate the productivity gains. Taking that into account, I am still 100% certain that the productivity gains I'm seeing are real.

The other day I knocked out a custom macOS app for presenting web-pages-as-slides in SwiftUI in 40 minutes, complete with a Tailscale-backed remote presenter control interface I could run from my phone. I've never touched Swift before. Nobody on earth will convince me that I could have done that without assistance from an LLM.

(And I'm sure you could say that's a bad example and a toy, but I've got several hundred more like that, many of which are useful, robust software I run in production.)
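(For readers wondering what "web-pages-as-slides in SwiftUI" even looks like: the actual app isn't shown in this thread, but a minimal shell might look roughly like the sketch below. The slide URLs and arrow-key navigation are assumptions for illustration, and the Tailscale-backed phone remote is omitted entirely.)

```swift
import SwiftUI
import WebKit

// SwiftUI has no built-in web view on macOS, so wrap WKWebView
// in an NSViewRepresentable to host it.
struct SlideWebView: NSViewRepresentable {
    let url: URL

    func makeNSView(context: Context) -> WKWebView {
        WKWebView()
    }

    func updateNSView(_ webView: WKWebView, context: Context) {
        webView.load(URLRequest(url: url))
    }
}

@main
struct SlidePresenterApp: App {
    // Hypothetical deck: each "slide" is just a web page URL.
    @State private var slides = [
        URL(string: "https://example.com/slide-1")!,
        URL(string: "https://example.com/slide-2")!,
    ]
    @State private var index = 0

    var body: some Scene {
        WindowGroup {
            // Arrow keys step through the deck (requires macOS 14+
            // for .onKeyPress); a real remote control would instead
            // drive `index` over the network.
            SlideWebView(url: slides[index])
                .onKeyPress(.rightArrow) {
                    index = min(index + 1, slides.count - 1)
                    return .handled
                }
                .onKeyPress(.leftArrow) {
                    index = max(index - 1, 0)
                    return .handled
                }
        }
    }
}
```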
