How about we just let nature take its course and rely on developers' laziness, one of the virtues of a good programmer?
I go to ChatGPT for basically any annoying code snippet and even whole functions now. I'm done guessing at map/reduce syntax, or trying to remember whether slice mutates the target array.
I'm messing with Codex more and more, but I still don't trust it to design features for me. Maybe in 6 months I will. Is it really that important to force developers NOW to get to a place they'll get to in a few months anyway, assuming the hype is real?
I've never seen people so hellbent on shooting themselves in the foot.
AI, or more accurately LLMs, are currently trained on shitty open source code.
The best-practice code out there is locked away in private companies' cabinets.
If you insist on 100% AI-written code, then how are you going to train the new generation to write software well?
You will reach a singular point where the new generation knows nothing and the LLMs themselves can't be trained further (we are almost there, btw).
LLMs as better autocomplete are the perfect use case, or as a rubber duck that talks back when you're debugging. Anything else is frivolous.
I'm hearing the same conversation play out at small software companies right now. Engineers asking their managers: am I being forced to adopt this, or do I need to go somewhere else?
Wrote about why I think the job description already changed, and what I'd rather see teams do about it than have that exhausting conversation on repeat.
How about we just let people code how they want, if the codebase doesn't care how it gets written? If it doesn't matter, why must we use one particular tool versus another?
Amish craftsmen confronted with a nail gun and an impact driver. Yeah you can still build a house swinging your arm if you want, but others won't.
Once again, this whole article is predicated on us being at the finish line. You know who will care about how something got written? Very suddenly, it will be the org that has an issue that the AI fails to fix or you don't understand well enough to fix within a span of time they deem reasonable. That has been the battle since the beginning of software and the only thing you have to combat it is your understanding.
I am still baffled by engineers' and developers' use of AI. I see failure after failure on anything other than some novelty one-shotted tool, and even then folks are bending over backwards to find the value, because that solution is always a shallow representation of what it needs to be.
My projects are humming away, handmade. They have bugs, but when they show up, I more often than not know exactly what to do. Meanwhile my counterparts have AI refactoring wild amounts of code for issues that could be solved with a few lines of adjustment.
TL;DR I feel like I live in a different reality than those who write these blogs.
Wherein a podcaster and self promoter sells out open source as if he spoke for everyone.
Please like and subscribe!
I detest the way ChatGPT writes. You can tell immediately when someone had a rough draft or just an idea thrown into the ChatGPT filter. At least tell it to cut to the chase next time; nobody has time for fluff in this attention economy.
Not sure what to think about the first part, but on writing style: I think it's still reasonable to judge developers/engineers by it:
- Writing style does reveal how people understand problems and their approach to solving them. People who prioritize direct solutions over complex abstractions are still valuable for catching over-engineered code.
- People with "good taste" in code can catch when AI-generated code takes shortcuts to accomplish a task; this happens every day and we can't ignore it.
The state of AI code may be way better in 6 months, a year, or even more (we don't really know), but we're not there yet, and we can't wait until then to hire new people without considering those points.