I do not think "AI coding" - as distinct from the human who drives it - is gambling. More like a delayed footgun for the uneducated. I don't mean that disparagingly, but I do mean it literally.
I’ve certainly been spending more time coding. But is it because it’s making me more efficient and smarter or is it because I’m just gambling on what I want to see?
Is this really a difficult question to answer for oneself? If you can't tell whether you're learning anything, or getting more confident describing what you want, I would suggest you aren't thinking that deeply about the code you're producing.

> Am I just pulling the lever until I reach jackpot?
And even then, will you know you've won? At the very least, a gambler knows when they've hit the jackpot. Here, you start off assuming you've won the jackpot every time, and maybe there'll be an unpleasant surprise down the line. Maybe that's still gambling, but it's pretty backwards.
An idea just occurred to me: why not tell AI to code in Coq? AFAIK the selling point of that language is that if your proofs compile, the code is guaranteed to meet its spec. It's just that it's a PITA to write code in Coq, but AI won't get annoyed and quit.
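To make the "if it compiles, it's proven" idea concrete, here's a tiny sketch in Lean, a close cousin of Coq; the function and theorem are made up for illustration:

```lean
-- Hypothetical example: the theorem *is* the spec. If this file
-- compiles, the property is machine-checked — no tests needed.
def double (n : Nat) : Nat := n + n

-- Proof by `rfl`: `double n` unfolds to `n + n` by definition,
-- so the two sides are definitionally equal.
theorem double_spec (n : Nat) : double n = n + n := rfl
```

The catch, of course, is that the guarantee only covers properties you bothered to state; an AI can still "prove" a spec that doesn't say what you meant.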
I really hate when people write about the AI of the past. Opus 4.6 and GPT 5.4 (the latter less so imo, it's really boring and uncreative) have increased in capabilities so much that it's honestly mind-numbing compared to what we had LESS than a year ago.
Opus specifically, from 4.1 to 4.5, was such a major leap that some take it for granted. It went from getting stuck in loops, getting lost constantly, and needing so much attention to keep it going, to being able to take a prompt, understand it from minimal context, and produce what you wanted. Opus 4.6 was a slight downgrade, since it has issues with respecting what the user has to say.
As always, scope the changes to no larger than you can verify. AI changes the scale, but not the strategy.
Now you have more resources to test, to reduce permission scope, and to build a test bench and procedure. All of the excuses you once had for not doing the job right are now gone.
You can write 10k+ lines of test code in a few minutes. What is the gamble? The old world was the bigger gamble.
So.
Is.
Life.
You've discovered probability; there was an 80% chance of that. Roll a die and do not pass go.
Again: the output from an LLM is a probable solution, not right, not wrong.
surprised this isn't talked about more
For me, the accelerated feedback loop that AI now permits is so addictive in my day-to-day flows. I've had a really hard time stepping away from work at a reasonable hour because I get dopamine hits from seeing Claude build things so fast.
Addiction and recovery is part of my story, so I've done quite a bit of work around that part of my life. I don't gamble, but I can confidently say that using LLMs has been an incredible boost in my productivity while completely destroying my good habits around setting boundaries, not working until 2AM, etc.
In that sense, it feels very much like gambling.
It's gambling until you learn how to set up proper harnesses; then it just becomes normal administration. It's no different from running a team: humans make mistakes too, which is why we have CI pipelines, automated testing, etc. AI-assisted coding "just" requires you to be extra good at that part of the job.
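To make "proper harness" concrete, here's a minimal sketch in Python; `normalize_email` is a made-up stand-in for whatever the AI generated, and the checks are the gate its output has to pass before it merges:

```python
# Hypothetical AI-generated function under test.
def normalize_email(raw: str) -> str:
    """Lowercase and strip an email address; reject obviously bad input."""
    cleaned = raw.strip().lower()
    if "@" not in cleaned:
        raise ValueError(f"not an email: {raw!r}")
    return cleaned


# The harness: behavior the code must satisfy regardless of who wrote it.
def test_normalize_email() -> None:
    assert normalize_email("  Bob@Example.COM ") == "bob@example.com"
    assert normalize_email("a@b") == "a@b"
    try:
        normalize_email("no-at-sign")
    except ValueError:
        pass  # expected: bad input is rejected
    else:
        raise AssertionError("bad input should raise")


test_normalize_email()
```

In a real setup these checks would live in CI so no change, human- or AI-authored, lands without passing them.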
Coding with an LLM works if the model you are following is: you play the role of architect and/or senior developer, and you have the smartest junior programmer in the world working for you. You watch everything it does, check its conclusions, challenge it, and call it out on things it didn't get quite right.
it's really extremely similar to working with a junior programmer
so in this post, where does this go wrong?
> I am not your average developer. I’ve never worked on large teams and I’ve barely started a project from scratch. The internet is filled with code and ideas, most of it freely available for you to fork and change.
Because this describes a cut-and-paster, not a software architect. Hence the LLM is a gambling machine for someone like this since they lack the wisdom to really know how to do things.
There's of course a huge issue, which is: how are we going to get more senior/architect programmers into the pipeline if every junior is also doing everything with LLMs now? I can't answer that, and this might be the asteroid that wipes out the dinosaurs... but in the meantime, if you DO know how to write from scratch and have some experience managing teams of programmers, the LLMs are super useful.
"60% of the time, it works every time"
Is using a calculator gambling?
...and the payouts are fantastic.
“hiring people is gambling”
h1b coding is ignorance.
I mean, this completely falls apart when you're trying to do something "real". I am building a trading engine right now with Claude/Codex. I have not written a line of code myself. However, I care deeply about making sure everything works well, because it's my money on the line. I have to weigh carefully the prospect of landing a change that I don't fully understand.
Sometimes I can get away with 3K LoC PRs, sometimes I take a really long time on a +80 -25 change. You have to be intellectually honest with yourself about where to spend your time.
This "slot machine" metaphor is played out. If you're just entering a coin's worth of information and nudging it over and over in the hopes of getting something good, that's a you problem, not a Claude problem.
If, on the other hand, you treat it like a hyper-competent collaborator, and follow good project management and development practices, you're golden.
Not only is it gambling, it has the full force of the industry that built the attention market behind it. I find it extremely hard to believe that these tools have not been optimised to keep developers prompting the same way tiktok keeps people scrolling.
When code doesn't compile, it doesn't kill anyone. But if a Waymo suddenly veers off the road, it creates a real threat. Waymos had to be safer than real human drivers for people to begin to trust them. Coding tools did not have to be better than humans to be adopted first. It's entirely possible for a human to make a catastrophic error. I imagine in the future it will be more likely that a human makes such errors, just like it's more likely that a human will make more errors driving a car.
I see whole teams, pushed by the C-level, going all in on spec-driven + TDD development. The devs hate it because they are literally forbidden to touch a single line of code, but the results speak for themselves: it just works, and the pressure has shifted to the product people to keep up. The whole tooling to enable this had to be worked out first: all Cursor, plus extreme use of a tool called Speckit, connected to Notion to pump out documentation, and to Jira.
> But this doesn't really resemble coding. An act that requires a lot of thinking and writing long detailed code.
Does it? It did in the past. Now it doesn't. Maybe "add a button to display a colour selector" really is the canonical way to code that feature, and the 100+ lines of generated code are just a machine language artifact like binary.
> But it robs me of the part that’s best for the soul. Figuring out how this works for me, finding the clever fix or conversion and getting it working. My job went from connecting these two things being the hard and reward part, to just mopping up how poorly they’ve been connected.
Skill issue. Two nights ago, I used Claude to write an iOS app to convert Live Photos into gifs. No other app does it well. I'm going to publish it as my first app. I wouldn't have bothered to do it without AI, and my soul feels a lot better with it.
Haha... I agree with the points mentioned in the article. Literally every model does this. It feels like this even with Skills and other buzzword files.