Hacker News

rglover yesterday at 4:18 PM | 30 replies

A significant number of developers and businesses are going to have an absolutely brutal rude awakening in the not too distant future.

You can build things this way, and they may work for a time, but you don't know what you don't know (and experience teaches you that you only find most stuff by building/struggling; not sipping a soda while the AI blurts out potentially secure/stable code).

The hubris around AI is going to be hard to watch unwind. When that moment will come I can't predict (nor do I care to), but there will be a shift when all of these vibe-code-only folks get cooked in a way that's closer to existential than benign.

Good time to be in business if you can see through the bs and understand how these systems actually function (hint: you won't have much competition soon, as most people won't care until it's too late and will "price themselves out of the market").


Replies

mark242 yesterday at 5:19 PM

I would argue that it's going to be the opposite. At re:Invent, one of the popular sessions was about creating a trio of SRE agents: one that did nothing but read logs and report errors, one that analyzed and triaged those errors and proposed fixes, and one that did the work and submitted PRs to your repo.

Then, as part of the session, you would artificially introduce a bug into the system and hit it in your browser. You'd see the failure happen in the browser, and looking at the CloudWatch logs you'd see the error get logged.

Two minutes later, the SRE agents had the bug fixed and ready to be merged.
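For the flavor of it, here's a rough sketch of the log-reading half of such a trio, in Python with boto3; the triage and PR steps are stubbed out as hypothetical functions, and none of this is the actual re:Invent code:

    # Rough sketch only: polls CloudWatch Logs for ERROR lines and hands them
    # to stubbed triage/fix agents. The log group name and stubs are made up.
    import time
    import boto3

    logs = boto3.client("logs")

    def recent_errors(log_group, lookback_seconds=120):
        """The 'log reader' agent: pull ERROR events from the last N seconds."""
        now_ms = int(time.time() * 1000)
        resp = logs.filter_log_events(
            logGroupName=log_group,
            filterPattern="ERROR",
            startTime=now_ms - lookback_seconds * 1000,
        )
        return [e["message"] for e in resp.get("events", [])]

    def triage(errors):
        # Placeholder: the second agent would have an LLM analyze the errors
        # and propose a concrete fix.
        raise NotImplementedError

    def open_pull_request(fix):
        # Placeholder: the third agent would apply the fix and submit a PR.
        raise NotImplementedError

    errors = recent_errors("/hypothetical/app/prod")
    if errors:
        open_pull_request(triage(errors))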

"understand how these systems actually function" isn't incompatible with "I didn't write most of this code". Unless you are only ever a single engineer, your career is filled with "I need to debug code I didn't write". What we have seen over the past few months is a gigantic leap in output quality, such that re-prompting happens less and less. Additionally, "after you've written this, document the logic within this markdown file" is extremely useful for your own reference and for future LLM sessions.

AWS is making a huge, huge bet on this being the future of software engineering, and even though they have their weird AWS-ish lock-in for some of the LLM-adjacent practices, it is an extremely compelling vision, and as these nondeterministic tools get more deterministic supporting functions to help their work, the quality is going to approach and probably exceed human coding quality.

show 7 replies
geophile yesterday at 4:52 PM

The article gets at this briefly and moves on: "I can do all of this with the experience on my back of having laid the bricks, spread the mortar, cut and sewn for twenty years. If I don’t like something, I can go in, understand it and fix it as I please, instructing once and for all my setup to do what I want next time."

I think this dynamic applies to any use of AI, or indeed, any form of outsourcing. You can outsource a task effectively if you understand the complete task and its implementation very deeply. But if you don't, then you don't know whether what you are getting back is correct, maintainable, or scalable.

show 4 replies
thephyber yesterday at 10:02 PM

This sounds entirely too doomer.

There will obviously be companies that build a vibe-coded app that too many people depend on. At some point an iteration (maybe a feature addition, maybe a bug fix) will cause a catastrophic breakage, and users will know.

But there will also be companies who add a better mix of incantations to the prompts, who use version control and CI, who ensure the code is matched with tests, who maintain the prompts and requirements documents.

The former will likely follow your projected path. The latter will do fine and may even thrive better than either traditional software houses or cheap vibe-coding shops.

Then again, there are famous instances of companies that have tolerated terribly low investment in IT, including Southwest Airlines.

show 1 reply
manuelabeledo yesterday at 7:32 PM

There are people out there who truly believe that they can outsource the building of highly complex systems by politely asking a machine, and ultimately will end up tasking the same machine to tell them how these systems should be built.

Now, if I were in business with any of these people, why would I be paying them hundreds of thousands, plus the hundreds of thousands in LLM subscriptions they need to barely function, when they cannot produce a single valuable thought?

FeteCommuniste yesterday at 4:43 PM

I don't think there's going to be any catastrophic collapse, but I predict de-slopping will grow to occupy more and more developer time.

Who knows, maybe soon enough we'll have specially trained de-slopper bots, too.

show 2 replies
giancarlostoro yesterday at 4:50 PM

I find that instructing AI to use frameworks yields better results and sets you up for a better outcome.

I use Claude Code with both Django and React, which it's surprisingly good with. I'd rather use software that's tried and tested. The only time I let it write its own is when I want ultra-minimal CSS.
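For example, a project-level CLAUDE.md (this is a made-up sketch, not anyone's actual file) can pin it to the frameworks already in use:

    # CLAUDE.md (illustrative sketch only)
    - Backend: Django. Use the ORM, forms, and class-based views; don't
      hand-roll SQL or custom auth.
    - Frontend: React with the existing component structure.
    - CSS: keep it ultra minimal, no new frameworks.
    - After any non-trivial change, update NOTES.md with the reasoning.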

show 1 reply
killerstorm yesterday at 9:40 PM

Software engineers have been confidently wrong about a lot of things.

E.g. OOP and "patterns" in the 90s. When was the last time you implemented a "visitor"?

P. Norvig pointed out that most of the patterns become invisible or much simpler in Common Lisp: e.g. you can just use a `lambda` instead of a "visitor". But OOP people kept drawing class diagrams for a simple map- or fold-like operation.
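As a rough illustration (in Python rather than Lisp, and obviously not Norvig's code): what a Visitor interface plus concrete visitor classes would express in 90s-style OOP often collapses into passing a function over the structure.

    # Illustrative only: the "visitor" is just a function handed to the walk.
    from dataclasses import dataclass

    @dataclass
    class Node:
        value: int
        children: list

    def visit(node, fn):
        # Walk the tree and apply fn to every node -- no Visitor hierarchy.
        fn(node)
        for child in node.children:
            visit(child, fn)

    tree = Node(1, [Node(2, []), Node(3, [Node(4, [])])])
    seen = []
    visit(tree, lambda n: seen.append(n.value))
    print(sum(seen))  # 10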

AI producing flawed code and "not understanding" are completely different issues. Yes, AI can make mistakes, we know. But are you certain your understanding is really superior?

drcode yesterday at 5:14 PM

I'm no fan of AI in terms of its long-term consequences, but being able to "just do things" with the aid of AI tools, diving headfirst into the most difficult programming projects, is going to improve human programming skills worldwide to levels never before imaginable.

show 2 replies
straydusk yesterday at 5:50 PM

Have you considered that betting against the models and ecosystem improving might be a bad bet, and you might be the one who is in for a rude awakening?

show 2 replies
divbzero yesterday at 7:01 PM

An HN post earlier this week declared that “AI is killing B2B SaaS”:

https://news.ycombinator.com/item?id=46888441

Developers and businesses with that attitude could experience a similarly rude awakening.

show 1 reply
kaydub yesterday at 7:11 PM

The hubris is with the devs that think like you actually.

show 1 reply
tenthirtyam yesterday at 6:05 PM

My expectation is that there'll never be a single bust-up moment, no line-in-the-sand beyond which we'll be able to say "it doesn't work anymore."

Instead, agent-written code will get more and more complex, requiring more and more tokens (& NPU/GPU/RAM) to create/review/debug/modify, and will rapidly pass beyond any hope of human understanding, even for relatively simple projects (e.g. a banking app on your phone).

I wonder, however, whether the complexity will grow slower or faster than Moore's law and our collective ability to feed the AIs.

show 1 reply
harrisi yesterday at 6:26 PM

The aspect of "potentially secure/stable code" is very interesting to me. There's an enormous amount of code that isn't secure or stable already (I'd argue virtually all of the code in existence).

This has already been a problem, and there are no real ramifications for it. Even something like Cloudflare stopping a significant amount of Internet traffic for some period is not (as far as I know) independently investigated. Nobody is potentially facing charges. With other civil engineering endeavors, however, there absolutely is accountability: regular checks, government agencies auditing systems, penalties for causing harm, and so on are expected in those areas.

LLM-generated code is the continuation of the bastardization of software "engineering." Now the situation is not only that nobody is accountable, but that a black-box cluster of computers cannot even reasonably be held accountable. If someone makes a tragic mistake today, it can be understood who caused it. If a "Cloudflare2" comes about that is entirely (or significantly) generated, whoever is in charge can just throw their hands up and say, "Hey, I don't know why it did this, and the people that made the system that made this mistake don't know why it did this." It has been and will continue to be very concerning.

show 1 reply
bthornbury yesterday at 5:48 PM

Why does there seem to be such a divide in opinions on AI in coding? Meanwhile those who "get it" have been improving their productivity for literally years now.

show 3 replies
karmasimida yesterday at 7:40 PM

I give it a year; the realization will be brutal.

MrDarcy yesterday at 4:57 PM

This comment ignores the key insight of the article. Design is what matters most now. Design is the difference between vibe coding and software engineering.

Given a good design, software engineers today are 100x more productive. What they produce is high quality due to the design. Production is fast and cheap due to the agents.

You are correct: there will be a reckoning for large-scale systems that are vibe coded. The author is also correct: well-designed systems no longer need frameworks or vendors, and they are unlikely to fail because they were well designed from the start.

show 1 reply
igleria yesterday at 8:34 PM

> A significant number of developers and businesses are going to have an absolutely brutal rude awakening in the not too distant future.

I pray (?) for times like the ones you predict. But companies can stay irrational longer than the average employee can afford.

bdcravens yesterday at 4:38 PM

You still "find most stuff by building/struggling". You just move up stack.

> there will be a shift when all of these vibe code only folks get cooked in a way that's closer to existential than benign

For those who are "vibe code only", perhaps. But it's no different than the "coding bootcamp only" developers who never really learned to think holistically. Or the folks who learned the bare minimum to get those sweet dotcom-boom dollars back in the day, and then had to return to selling cars when it all came crashing down.

The winners have been, and will always be, those who can think bigger: the ones today who already know how to build from scratch, then find that the superpower is in architecture, not syntax, and suddenly find themselves 10x more productive.

aogaili yesterday at 7:13 PM

What makes you so sure of your statement?

I have been building systems for 20 years and I think the author is right.

farseer yesterday at 8:19 PM

I think it will be the opposite, and we are all in for a rude awakening. If you have tried playing with Opus 4.6, you know what I am talking about.

show 1 reply
bodge5000 yesterday at 10:13 PM

> Good time to be in business if you can see through the bs and understand how these systems actually function

You missed out the most crucial and least likely requirement (assuming you're not self employed); management also need to be able to see through the bs.

wouldbecouldbe yesterday at 6:08 PM

Yeah I completely disagree with the author actually, but also with you.

The frameworks are what make the AI write easily understandable code. I let it run Next.js with an ORM, and it almost always creates very well-defined API routes, classes & data models. Often better than I would do myself.

I also ask it to be much more rigorous about validation & error handling than I would ever be. It makes mistakes; I shout at it and it corrects them quickly.

So the projects I've been "vibe coding" have a much better codebase than I used to have on my solo projects.

fennecbutt yesterday at 6:59 PM

Business has been operating on a management/executive culture for many decades now.

These people get paid millions a year to fly around and shake hands with people, aka do shit fuck all.

At times in the past I have worked on projects that were rushed out and didn't do a single thing that they were intended to do.

And you know what management's response was? They loved that shit. Ooooh it looks so good, that's so cool, well done. Management circle-jerking each other, as if using everyone else's shafts as handles to climb the rungs of the ladder.

It's just... it kills me that this thing I love (technology/engineering/programming), the stuff responsible for many of the best things in our modern lives, has been twisted to create some of the worst things in our modern lives in the pursuit of profit. And the people in charge? They don't even care if it works or not, they just want that undeserved promotion for a job that a Simpsons-esque fucking drinking bird is capable of.

I just want to go back to the mid 2000s. ;~;

markus_zhang yesterday at 5:35 PM

But by then many of us will already have starved. That's why I've always said that engineers should NOT integrate AI with internal data.

redleggedfrog yesterday at 5:15 PM

The future is already here. I've been working a few years at a subsidiary of a large corporation where the entire hierarchy of companies is pushing AI hard, at different levels of complexity, from office work up through software development. There are regular meetings across companies and divisions to discuss methods and progress. Overall not a bad strategy, and it's paying dividends.

An experiment was tried on a large and very intractable codebase of C++, Visual Basic, classic ASP, and SQL Server, with three different reporting systems attached to it. The reporting systems were crazy, controlled by giant XML files with complex namespaces and no-nos like the order of the nodes mattering. It had been maintained by offshore developers for maybe 10 years or more. The application was originally created over 25 years ago. They wanted to replace it with modern technology, but they estimated it'd take 7 years(!). So they just threw a team at it and said, "Just use prompts to AI, hand-code minimally, and see how far you get."

And they did wonderfully (and this was before the latest Claude improvements and agents): they managed to create a minimal replacement in just two months (two or maybe three developers full time, I think, was the level of effort). This was touted at a meeting and given approval for further development. At the meeting I specifically asked, "You only maintain this with prompts?" "Yes," they said, "we just iterate through repeated prompts to refine the code."

It was all mostly abandoned a few months later. Parts of it are being reused, attempting a kind of "work in from the edges" approach to replacing parts of the system, but mostly it's dead.

We have yet to hold a postmortem on the whole thing, but I've talked to the developers, and they essentially traded one intractable problem for another: repeated prompting kept breaking existing features when attempting to apply fixes or add new ones, and breaking them in really subtle and hard-to-discern ways. The AI-created unit tests often didn't find these bugs, either. They really tried a lot of angles to sort it out: complex .md files, breaking up the monolith so the AI had less context to track, gross simplification of existing features, and so on. These are smarty-pants developers, too, people who know their stuff, with better than BS's, and they themselves were at first surprised at their success, then not so surprised at the eventual result.

There was also a cost angle that became intractable: coding like that was expensive. There was a lot of hand-wringing from managers over how much it was costing in "tokens" and whatever else. I pointed out that if it costs less than 7 years of development, you're ahead of the game, to which they pointed out that the traditional rewrite would be a cost spread over 7 years, not concentrated in 1 year. I'm not an accountant, but apparently that makes a difference.

I don't necessarily consider it a failed experiment, because we all learned a lot about how to better do our software development with AI. They swung for the fences but just got a double.

Of course this will all get better, but I wonder if it'll ever get there like we envision, with the Star Trek "Computer, make me a sandwich" method of software development. The takeaway from all this is that you still have to "know your code" for things that are non-trivial, and really, you can go a few steps above non-trivial. You can go a long way not looking too closely at the LLM output, but there is a point at which it starts to be friction.

As a side note, not really related to the OP: the UI cooked up by the LLMs was an interesting "card"-looking kind of thing, actually pretty nice to look at and use. Then, when searching for a wiki for the Ball x Pit game, I noticed that some of the wikis very closely resembled the UI of the application. Now I see variations of it all over the internet. I wonder if LLMs "converge" on a particular UI when not given specific instructions.

show 4 replies
cookiengineer yesterday at 5:55 PM

Come to the redteam / purpleteam side. We're having fun times right now. "Every software has bugs" is now on a whole new level, because people don't even care about SQL injection anymore. It's built right into every vibe-coded codebase.

Authentication and authorization is as simple as POST /api/create/admin with zero checks. Pretty much every slop-coded API out there looks like this. And if it doesn't, the model will forget about security checks two prompts later and reverse the previously working ones.
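A sketch of what that looks like in practice (the endpoint and table names are invented for illustration): an unauthenticated admin-creation route with string-interpolated SQL, i.e. the missing authorization check and the injection in one handler.

    # Illustrative anti-pattern only; assumes a Flask app and a sqlite 'users'
    # table. Do not write this.
    import sqlite3
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/api/create/admin", methods=["POST"])
    def create_admin():
        # No authentication or authorization check whatsoever.
        name = request.json["name"]
        conn = sqlite3.connect("app.db")
        # String-formatted SQL: a textbook injection vector. A parameterized
        # query, e.g. conn.execute("INSERT INTO users (name, role) VALUES (?, 'admin')", (name,)),
        # would avoid it.
        conn.execute(f"INSERT INTO users (name, role) VALUES ('{name}', 'admin')")
        conn.commit()
        return jsonify({"ok": True})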

nojito yesterday at 4:38 PM

>A significant number of developers and businesses are going to have an absolutely brutal rude awakening in the not too distant future.

Correct. Those who wave away coding agents and refuse to ingrain them into their workflows are going to be left in the dust.

show 3 replies

rugPool yesterday at 6:12 PM

Back in the 00s people like you were saying "no one will put their private data in the cloud!"

"I am sick of articles about the cloud!"

"Anyone know of message boards where discussing cloud compute is banned?"

"Businesses will not trust the cloud!"

Aside from logistics of food and medicine, most economic activity is ephemeral wank.

It's memes. It's a myth. Allegory.

These systems are electrical state in machines and they can be optimized at the hardware layer.

Your Python or Ruby or whatever you ship 9,000 layers of state and abstraction above the OS running in the data center has little influence on how these systems actually function.

To borrow from poker: software engineers were being handed their hats years ago. It's already too late.