> Most people suck at playing the piano. Most people suck at prompting coding agents. If you practice either of those things you'll get better at them.
It would be funny if, by now, I weren't convinced you are pushing these false analogies on purpose. The key difference between a piano and an LLM is that the piano produces the same sounds for the same sequence of keys. Every single time. A piano is deterministic. LLMs are not, and you know it, which makes your constant comparison of deterministic with non-deterministic tools sound a bit dishonest. So please stop using these very weak analogies.
> I really don't understand the "stop telling me I'm holding it wrong" argument. You probably are holding it wrong!
Right, another weak argument. Writing English-language paragraphs is not the science you seem to imply it is. You're not the only person who has been using LLMs intensively for the last few years, and it's not like there is some huge secret to using them - after all, they use natural language as their primary interface. But that's beside the point. We're not discussing whether they are hard or easy to use. We are discussing whether I should replace the magnificent supercomputer already placed in my head by mother nature or God or aliens or whatever you believe in with a very shitty, downgraded version 0.0.1 of it sitting in someone's datacenter, all for the sake of sometimes cutting corners by getting that quick awk/sed one-liner or some boilerplate code. I don't think that's a worthy tradeoff, especially when the relevant reports indicate an objective slowdown, which probably also explains the so-called LLM fatigue.
> Is this born out of some weird belief that "AI" is meant to be science fiction technology that you don't ever need to learn how to use?
No, actually, it is born out of the weird belief that your sponsors have been explicitly or implicitly promoting, for the fourth year now, with varying intensity and frequency: that LLM technology will be the equivalent of a "country of PhDs in a datacenter". All of this is based on the super weird transhumanist ideology that many of the people directly or indirectly sponsoring your writing actively believe in. And whether you like it or not, even if you have never claimed the same, you have been a useful helper by providing a more "rational"-sounding voice commenting on the supposed incremental improvements and progress and whatnot.
Fine, if you don't like the piano analogy:
Most people suck at falconry. If you practice at falconry you'll get better at it.
Falcons certainly aren't deterministic.
> it's not like there is some huge secret to using them - after all, they use natural language as their primary interface
That's what makes them hard to use! A programming language has around 30 keywords and does what you tell it to do. An LLM accepts input in 100+ human languages and, as you've already pointed out many times, responds in non-deterministic ways. That makes figuring out how to use them effectively really difficult.
> We are discussing if I should replace the magnificent supercomputer already placed in my head by mother nature or God or Aliens or whatever you believe in, for a very shitty, downgraded version 0.0.1 of it sitting in someone's datacenter
We really aren't. I consistently argue for LLMs as tools that augment and amplify human expertise, not as tools that replace it.
I never repeat the "country of PhDs" stuff because I think it's over-hyped nonsense. I talk about what LLMs can actually do.