You don't have to trust it. You can review its output. Sure, that takes more effort than vibe coding, but it can very often be significantly less effort than writing the code yourself.
Also consider that "writing code" is only one thing you can do with it. I use it to help me track down bugs, plan features, verify algorithms that I've written, etc.
Many of us are literally being forced to use it at work by people who haven't written a line of code in years (VPs, directors, etc.) and who played around with it one weekend and had their minds blown.
Sure, but we're trying to have curious conversation here, whereas this is the kind of dismissive, even curmudgeonly comment we're hoping to avoid.
LLMs are tool-shaped objects: https://minutes.substack.com/p/tool-shaped-objects
Without adequate real-world feedback, the simulation starts to feel real: https://alvinpane.com/essays/when-the-simulation-starts-to-f...
I could say the same about every web app in the world... they fail every single day, in obvious, preventable ways. Don't look at the JavaScript console as you browse unless you want a horror show. Yet here we all are, using all these websites, depending on them in many cases for our livelihoods.
I don't trust it completely but I still use it. Trust but verify.
I've had some funny conversations -- Me: "Why did you choose to do X to solve the problem?" ... It: "Oh, I should totally not have done that, I'll do Y instead."
But it's far from being so unreliable that it's not useful.
We've worked with humans for decades and are used to 25x less reliability.
OP isn't holding it right.
How would you trust autocomplete when it can get things wrong? A: You don't. You verify!
I've spent 30 years seeing the junk many human developers deliver, so I've had 30 years to figure out how we build systems around teams to make broken output coalesce into something reliable.
A lot of people just don't realise how bad the output of the average developer is, nor how many teams successfully ship with below-average developers.
To me, that's a large part of why I'm happy to use LLMs extensively. Some things need smart developers. A whole lot of things can be solved with ceremony and guardrails around developers who'd struggle to reliably solve fizzbuzz without help.