
seamossfet · today at 6:01 PM · 61 replies

I find most developers fall into one of two camps:

1. You treat your code as a means to an end to make a product for a user.

2. You treat the code itself as your craft, with the product being a vector for your craft.

The people who typically have the most negative things to say about AI fall into camp #2 where AI is automating a large part of what they considered their art while enabling people in group #1 to iterate on their product faster.

Personally, I fall into the first camp.

No one has ever made a purchasing decision based on how good your code is.

The general public does not care about anything other than the capabilities and limitations of your product. Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.

With that said, I do have respect for people in the latter camp. But they're generally best fit for projects where that level of craftsmanship is actually useful (think: mission-critical software, libraries other devs depend on, etc.).

I just feel like it's hard to talk about this stuff if we're not clear on which types of projects we're talking about.


Replies

coffeefirst · today at 7:43 PM

This is absolutely false. The purpose of craft is to make a good product.

I don’t care what kind of steel you used to design my car, but I care a great deal that it was designed well, is safe, and doesn’t break down all the time.

Craft isn’t a fussy thing.

joe_the_user · today at 7:57 PM

I was just using an app that competes with Airbnb. That the app's code is extraordinarily unreliable was a significant factor in my interactions with others on the app; in particular, I gradually realized I couldn't be sure messages were delivered or data was up to date.

That influenced some unfortunate interactions with people and meant that no one could be held to their agreements since you never knew if they received the agreements.

So, well, code quality kind of matters. But I suppose you're still right in a sense - currently people buy and use complete crap.

Pxtl · today at 8:45 PM

As developers we have a unique advantage over everybody else dealing with the way AIgen is revolutionizing careers:

Everybody else dealing with AIgen is suffering the AI spitting out the end product. It's like asking the AI to generate the compiled binary instead of the source.

Artists can't get AIgen to make human-reviewed changes to a .psd file or an .svg, it poops out a fully formed .png. It usurps the entire process instead of collaborating with the artist. Same for musicians.

But since our work is done in text and there's a massive publicly accessible corpus of that text, it can collaborate with us on the design in a way that others don't get.

In software the "power of plain text" has given us a unique advantage over kinds of creative work. Which is good, because AIgen tends to be clumsy and needs guidance. Why give up that advantage?

bigstrat2003 · today at 7:23 PM

> No one has ever made a purchasing decision based on how good your code is. The general public does not care about anything other than the capabilities and limitations of your product.

The capabilities and limitations of your product are defined in part by how good the code is. If you write a buggy mess (whether you write it yourself or vibe code it), people aren't going to tolerate that unless your software has no competitors doing better. People very much do care about the results that good code provides, even if they don't care about the code as an end in itself.

imiric · today at 6:58 PM

While creating good software is as much of an art as it is a science, this is not why the craft is important. It is because people who pay attention to detail and put care into their work undoubtedly create better products. This is true in all industries, not just in IT.

The question is how much the market values this, and how much it should.

For one-off scripts and software built for personal use, it doesn't matter. Go nuts. Move fast and break things.

But the quality requirement scales proportionally with how many people use and rely on the software. And not just users, but developers. Subjective properties like maintainability become very important if more than one developer needs to work on the codebase. This is true even for LLMs, which can often make a larger mess if the existing code is not in good shape.

To be clear, I don't think LLMs inevitably produce poor quality software. They can certainly be steered in a good direction. But that also requires an expert at the wheel to provide good guidance, which IME often takes as much, if not more, work than doing it by hand.

So all this talk about these new tools replacing the craft of programming is overblown. What they're doing, and will continue to do unless some fundamental breakthrough is reached, is make the creation of poor quality software very accessible. This is not the fault of the tools, but of the humans who use them. And this should concern everyone.

packetlost · today at 6:10 PM

I agree on the software dev camps.

> The general public does not care about anything other than the capabilities and limitations of your product.

It's absolutely asinine to say the general public doesn't care about the quality and experience of using software. People care enough that Microsoft's Windows director sent out a very tail-between-legs apology letter due to the backlash.

It's as it always has been: balancing quality and features is... well, a balance, and it matters.

BoorishBears · today at 9:25 PM

I think some people are misunderstanding your point.

Yes, some people left to their own devices would take twice as long to ship a product half as buggy only to find out the team that shipped early has taken a massive lead on distribution and now half the product needs to be reworked to catch up.

And some people left to their own devices will also ship a buggy mess way too early to a massive number of people and end up with zero traction or validation out of it, because the bugs weren't letting users properly experience the core experience.

So we've established that no one is entirely right and no one is entirely wrong; it's yin/yang, and really both sides should ideally exist in each developer in a dynamic balance that shifts with the situation.

-

But there's also a 3rd camp that's the intersection of these: You want to make products that are so good or so advanced *, that embracing the craft aspect of coding is inherent to actually achieving the goal.

That's a frontend where the actual product is well outside typical CRUD app forms + dashboard and you start getting into advanced WebGL work, or complex non-standard UI state that most LLMs start to choke on.

Or needing to do things quicker than the "default" (not even naive) approach allows for UX reasons. I ran into this using Needleman-Wunsch to identify UI elements on return visits to a site without an LLM request adding latency: to me that's the "crafty" part of engineering serving an actual user need. It's a completely different experience getting near instant feedback vs the default today of making another LLM request.
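For context, Needleman-Wunsch is a classic dynamic-programming global alignment algorithm (originally for biological sequences). A minimal sketch of scoring the alignment between two visits' sequences of UI element "signatures" might look like this; the signature strings and score constants are illustrative, not the commenter's actual implementation:

```typescript
// Needleman-Wunsch global alignment over two sequences of UI element
// signatures (e.g. tag + role strings). Score constants are hypothetical.
const MATCH = 2, MISMATCH = -1, GAP = -1;

function alignmentScore(a: string[], b: string[]): number {
  // dp[i][j] = best score aligning the first i items of a with the first j of b
  const dp: number[][] = Array.from({ length: a.length + 1 }, () =>
    new Array<number>(b.length + 1).fill(0)
  );
  // Aligning against an empty sequence costs one gap per element.
  for (let i = 1; i <= a.length; i++) dp[i][0] = i * GAP;
  for (let j = 1; j <= b.length; j++) dp[0][j] = j * GAP;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const sub = dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? MATCH : MISMATCH);
      dp[i][j] = Math.max(sub, dp[i - 1][j] + GAP, dp[i][j - 1] + GAP);
    }
  }
  return dp[a.length][b.length];
}

// A page that gained one element still aligns well: 4 matches, 1 gap.
const prev = ["nav", "button:search", "input", "button:submit"];
const curr = ["nav", "button:search", "banner", "input", "button:submit"];
console.log(alignmentScore(prev, curr)); // → 7 (4 * MATCH + 1 * GAP)
```

The point of the comment stands out in the sketch: this runs entirely client-side in microseconds, so element re-identification across visits adds no model-call latency.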

And it's this 3rd camp's feedback on LLM development that people in the 1st camp wrongly dismiss as coming from the 2nd, craft-maxxed group. For some use cases, slop is actually terminal.

Intentionally contrived example, but if you're building a Linear competitor and you vibecode a CRDT setup that works well enough, but has some core decisions that mean it'll never be fast enough to feel instant and frontend tricks are hiding that, but now users are moving faster than the data and creating conflicts with their own actions and...

You backed yourself into a wall that you don't discover until it's too late. It's only hypervigilance and strong taste/opinion at every layer of building that kind of product that works.

LLMs struggle with that kind of work right now and what's worrying is, the biggest flaw (a low floor in terms of output quality) doesn't seem to be improving. Opus 4.6 will still try to dynamically import random statements mid function. GPT 5.3 tried to satisfy a typechecker by writing a BFS across an untyped object instead of just updating the type definitions.

RL actually seems to be driving the floor lower as the failure modes become more and more unpredictable, even compared to GPT 3.5, which wasn't "creative enough" to do some of these things. It feels like we need a bigger breakthrough than we've seen in the last 1-2 years to actually get to the point where it can do that "Type 3" work.

* good/advanced to enable product-led growth, not good/advanced for the sake of it

ModernMech · today at 6:11 PM

> You treat your code as a means to an end to make a product for a user.

It isn’t that, though; the “end” here is making money, not building products for users. Typically, people who are making products for users care about the craft.

If the means-to-end people could type words into a box and get money out the other side, they would prefer to deal with that than products or users.

That’s why AI slop is so prevalent: the people putting it out there don’t care about the quality of their output or how it’s used by people, as long as it juices their favorite metrics (views, likes, subscribes, ad revenue, whatever). Products and users are not in scope.

slopinthebag · today at 6:33 PM

This is just cope to avoid feeling any shame for shipping slop to users.
