"Don't worry, scrote. There are plenty of 'tards out there living really kick-ass lives. My first wife was 'tarded. She's a pilot now."
IYKYK
yes, imho part of the problem with vibe coding is that the training data is full of low-quality advice/code, and it seems to me you won't ever get rid of it. A perfect feedback loop to clean the training data of bad advice/code without massive human intervention seems impossible as well.
"AI speedtracks bullshit shops into bullshit factories" is the other side of "AI enables efficiency gains beyond immagination". As a freelancer I get to see both in action.
No surprise! Do you remember agile? Sometimes it was pragmatically applied towards efficiency; sometimes it became a bullshit religion full of priests and ceremonies. And on I could go with more examples; the gist stays the same: new tools, speed increase, and whether it's a faster crash or faster travel depends on the trajectory the company/team/project/thing was already on.
A special note on "People who cannot write code are building software." "Fuck yeah" to that! Devs have shipped bad software to people in other departments/domains for ages. Those people would never build something themselves if what they'd been given was good in the first place.
When we (coders/startups) were doing it, it was "innovation"; now it's "elephants in the china shop"? And this is not a rhetorical snappy question: that IS innovation. Instead of criticizing the "wrong schema", understand the idea, help build it, and do the job: ship code that works and is safe.
Also, grey-beard here: pls don't think you can ever have a stable job, especially when code is around. It keeps changing; it always has, and it always will. "AI bringing unprecedented changes" is hype. The world has always changed fast.
If "you" picked software development because of salary, you are in danger. If you did it because you love it, then tell me with a straight face this is not one of the best moments to be alive.
AI is another development that drives me absolutely mad. It's jet fuel for the people who leave a trail of technical debt behind, which the people who care more about that sort of thing then have to try to clean up.
AI promises "you don't even need to understand the problem to get work done!" But doing the work is how I understand problems, and understanding the problem is the bottleneck.
It's not AI that's scary, it's people using it in fields they don't know and then defending wrong outputs like they built it themselves.
Multiple times while reading through this article I had a real, physical feeling of my heart sinking, because the situation described isn't only horrible, it is absolutely real, and I can totally relate to it. Verbatim.
Here is a solution to this problem, I think: use an LLM. Summarize everything. If there is fluff, it gets dropped. Basically we only care about the relevant information content, regardless of the number of characters used, so we need a compressed representation.
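Something like this sketch, say, with the openai Python client (the model name and prompt are placeholders I'm assuming, not a recommendation):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def compress(doc: str) -> str:
        # Ask the model to keep the information content, drop the fluff.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any capable model would do
            messages=[
                {"role": "system",
                 "content": "Summarize this document. Keep every concrete "
                            "fact, decision, and number. Drop the filler."},
                {"role": "user", "content": doc},
            ],
        )
        return response.choices[0].message.content

Whether a model can reliably tell fluff from signal is exactly the open question, of course.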
Fuck, yes. This.
I work in an "AI-first" startup. Being "The Expert", my work has become 90% reviewing the tons of crap that confident BD people now produce, pretending to understand stuff that has never been their domain, proudly showing off their 20-page hallucinated docs in the general chat as the achievement of their life.
"Heads up folks, I wrote this doc! @OP can you review for accuracy and tone pls?"
And don't hit me with the smartass "just say no", it's not an option. I tried that initially. I have a pretty senior position in the org; I complained to the CTO I report to, and to the BD managers as well, that I do not have the bandwidth to review AI-produced crap. A couple of weeks later, the CEO and leadership were on an org call spelling out loud that "we should collaborate and embrace AI in all our workflows, or we will be left behind". They even issued a requirement to write a weekly report about "how AI improved my productivity at work this week".

Luckily I am senior enough to afford ignoring these asks, but I feel bad for all my younger colleagues, who are basically forced to train their replacements. I am not even sure at this point whether this is all part of the nefarious corporate MBA "we can finally get rid of employees" wet dream, whether it's just virtue-signalling to investors, or whether the CEO and friends genuinely believe their own words. I have the feeling leadership (not only in my org) has gone into AI-autopilot mode and just disappeared to the sunny tropical beaches they always wanted to belong to.
I would happily find another workplace at this point, but you know how the market is right now, and anyway, I have the feeling that this shit is happening pretty much anywhere money is.
Everyone feels smart now, and it's a curse.
God, how I hate this. It's making my life miserable.
What credentials does this author have to cite social science research in their determination of the competency of other people? Their only other article is about eschewing native apps - why am I supposed to take their opinion about measuring competency seriously if they are a software engineer, not a psychologist? They are clearly outside of their domain of expertise and therefore incapable of producing work with any value whatsoever, according to their own arguments.
Instead of helping, the author fought against him: "from day one anyone could tell that the schemas were wrong". Yet nobody helped him; instead they went to the VP and complained about him. Sad. What a horrible place to work.
Dismissing this as just another anti-AI blog could appear a shallow dismissal, but in reality it is mostly the pain of adapting to change. The writer has a certain framework of norms, a world where good and bad are well defined, and in which he knows what's desirable and what's not.
This is not new. It has happened with every new technology or paradigm change. The old norms take a while to adapt to the new world, and that involves some pain, producing writings like this one.
Impersonation, by using abilities that are not biologically one's own, has been the strategy of dominance for the human race. Horse-riding knights with bows and arrows dominated other humans who didn't have horses or arrows.
What are you complaining about? Quality of the software produced? Quality of objectives? Here is the truth. None of that is the root goal. You need to change your assumptions and norms and root goals.
And the added horror of PRs that keep on coming. Correct-looking code with no thought behind it.
Damn, I came here for practical advice
The cope-ism in this blog post is palpable. The author is genuinely offended that someone who doesn't know how to code is daring to invade his turf. It's pretty sad that this is how he is reacting.
I, for one, welcome the new paradigm shift of vibe coders entering the field. I still think I have a competitive advantage with my 30+ years of coding experience, but I don't think it's wrong for vibe coders to enter my turf. I think the value of code is rapidly going asymptotically to ZERO. Code has no value anymore. It doesn't matter if it's slop as long as it works. If you are one of those who believe that all code written by humans is sacred and infallible, you probably don't have a lot of experience working in many companies. Most human code is garbage anyway. If it's AI-generated, at least it's based on better principles, and if it's really bad you just need to reprompt it or wait for a newer version of the AI and it will automatically get better.
THIS IS THE NEW PARADIGM. THINKING YOU HAVE ANY POWER TO SWAY THE FUTURE AWAY FROM THIS PATH IS FOOLISH.
I'm currently running a migration program at work, and it turns out there's a 10 MB limit on how much data I can batch over at one time. At first I asked the AI to copy 10 rows per batch, but that was too slow. Then I asked it to change the code to do 400 rows per batch, but sometimes it failed because it exceeded the 10 MB limit. Then I said: just collect rows until you get to 10 MB, then send the batch off. This is working perfectly, and I'm now running it without any hitches so far. Then I asked it to add, after every batch, an estimate of how long it would take to finish, including the end time.
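Roughly, the logic it landed on looks like this (my own sketch, not the actual code; rows, send_batch, and total_rows are placeholders, and I'm assuming the 10 MB limit applies to the serialized payload):

    import json
    import time

    MAX_BATCH_BYTES = 10 * 1024 * 1024  # the assumed 10 MB payload limit

    def migrate(rows, send_batch, total_rows):
        batch, batch_bytes, done = [], 0, 0
        start = time.monotonic()
        for row in rows:
            row_bytes = len(json.dumps(row).encode("utf-8"))
            # Flush before this row would push the batch past the limit.
            if batch and batch_bytes + row_bytes > MAX_BATCH_BYTES:
                send_batch(batch)
                done += len(batch)
                elapsed = time.monotonic() - start
                eta_min = elapsed / done * (total_rows - done) / 60
                print(f"{done}/{total_rows} rows, ~{eta_min:.1f} min remaining")
                batch, batch_bytes = [], 0
            batch.append(row)
            batch_bytes += row_bytes
        if batch:  # flush the final partial batch
            send_batch(batch)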
I really love this new world we're living in with AI coding. Sure, this could have been done by someone without experience, but at least for right now the ideas I can come up with are much better than those of someone without any experience, and that's hopefully the edge that keeps me employed. But whatever the new normal is, I'm ready to adapt.
Who cares? I obviously didn't like the article.
> Schemas were all wrong
Why'd you let him run wild for two months? What software org would let anyone, even a principal, do that? Wouldn't the very first thing you'd do be to review the guy's schema? This reads like all the other snarky posts on HN about how everyone is punching above their pay grade while the people who are much more advanced in the space just watch, like two trains colliding.
I'll tell you what is productive in the workplace: communication. That is it. Communicate and lift the guy up, give the guy a running start, instead of chilling in the break room snarking with all your snarky co-workers.
It would be nice if someone invented a mouse with a tiny motor inside, so I could put on sunglasses, rest my hand on the mouse, doze off, and still look like I'm working hard.
It's incredibly humorous to watch companies take a gift horse and drown it for sport.
I've been offered a Book of Shadows for cryin' out loud.
Great article. Hits on many points that resonate with my experience.
The skin in the game one, in particular, is something I've been thinking about. People have been telling me LLMs are "more intelligent" than "average people". But it's easy to sound intelligent when you have no skin in the game. People have to stand by their word and suffer the consequences of their actions. It's not enough just to sound intelligent.
It seems appropriate also to share an anecdote from an incident that recently happened at my job. A colleague submitted some code for review, quite a lot of it. A second colleague reviewed it and questioned a piece of the code. Rather than answering the question with a justification, the submitter took the question as rhetorical and removed the code. The code then failed in production because the removed code was, in fact, necessary. The LLM obviously "knew" this, but neither colleague did. It's leading me to introduce a "no rhetorical questions in code review" rule: the submitter must be able to justify every line of code they submit.
Exactly what we see.
And the worst offenders are those insisting this isn't the case.
I think it's interesting that the data suggests novices can increase productivity by a third and experts not at all. That sounds very similar to Dunning-Kruger: the novices literally don't know what productivity looks like.
I'm finding it difficult to agree that document creation is now zero-cost while consumption remains high-cost. I think you can actually spend the time to give AI enough context to consume docs for you.
I think the other thing worth pointing out with the article is understanding what your company will recognise. Yes, it's totally correct that your company won't thank you for poo-pooing the idiot with AI. Yes, they'll run into a buzz saw when they hit a stakeholder who can choose not to buy in. Don't burn your career protecting theirs. In fact, it's not even certain that the idiot is damaging their career (for many reasons).
This was a really interesting article.
So essentially, AI is exacerbating the Dunning-Kruger effect in society.
Throughout my career, many people have believed such bullshit demonstrated their productivity. What has gotten me promoted in the past was doing the opposite: trying not to appear busy. If you have to justify your existence, then your reason for existing is not well justified.
I think this is exciting. The market will do its job and crush the inefficient companies where management is unable to recognize the slop. People who produce value will produce more of it with AI, people who wasted resources will waste more of it with AI.
I’m certainly glad we have respected contributing members of our community named things like “diebillionaires”. What’s next, “killallkikes”? HN is an amazing place.
We have found the great filter, and it is LLMs.
Back around 2005, I worked with a guy who was trying to position himself as the go-to expert on the team. He'd always jump at the chance to explain things to QA and the support team. We'd occasionally hear follow-up questions from those teams and realize that he was just making things up.
He also had a serious case of cargo-cult mentality. He'd see some behavior and ascribe it to something unrelated, then insist with almost religious fervor that things had to be coded in a certain way. He was also a yes-man who would instantly cave to whatever whim management indicated. We'd go into a meeting in full agreement that a feature being requested was damaging to our users, and he'd be nodding along with management like a bobble-head as they failed to grasp the problem.
Management never noticed that he was constantly misleading other teams, or that he checked in flaky code he found on the Internet that triggered multiple days of developer time to debug. They saw him as a highly productive team player who was always willing to "help" others.
He ended up promoted to management.
Anyway, my point is that management seems to care primarily about having their ego boosted, and about seeing what they perceive as a hard worker, even if that worker is just spinning his wheels and throwing mud on everyone else. I'm sure that AI is only going to exacerbate this weird, counter-productive corporate system.
> Requirements documents that were once a page are now twelve. Status updates that were once three sentences are now bulleted summaries of bulleted summaries.
I've been on the receiving end of this, and it sucks. It shows a lack of care and true discernment. Then you push back and, again, you're arguing with Claude, not with the person.
I don't know what the solution is here. :(
That perfectly describes my manager.
s/betray/portray/ ?
I had a feeling I wasn’t the only one witnessing this madness.
Well, this unlocked a new fear. I can imagine all the similar "nests" of AI-generated content out there being created right now. I am likely to have to untangle one some day, or at least break it to someone that it's garbage. It's almost as if the AI itself has built a nest and is hoarding artifacts, but it's actually the human deciding to bundle up the slop and put a bow on it.
Excellent article! Aptly describes what I have been feeling and thinking about the claims many AI optimists make.
---
> He produced a great deal of code, [...] He could not, when asked, explain how any of it actually worked. [...] When opinions were voiced even as high as a V.P., he fought back.
AI has democratized coding, but people have yet to understand that it takes expertise to actually design a system that can handle scale. Of course you can build a PoC in a few hours with Claude Code, but that won't generate value.
The reason we see such examples in the workplace is the false marketing done by CEOs and wrapper companies. It gives people the false hope that "they can just build things" when they can only build demos.
Another reason is that the incentives in almost every company have shifted to favour a person using AI. It's like the companies are purposefully forcing us to use AI, to show demand for AI, so that they can get a green signal to build more data centers.
---
> So you have overconfident novices able to improve their individual productivity in an area of expertise they are unable to review for correctness. What could go wrong?
This is a much-needed point to raise.
I have many people around me saying that people my age are using AI to get 10x or 100x better at doing stuff. How are you evaluating them to check whether they actually improved that much?
I have experienced this excessively on Twitter over the last few months. It is like a cult. Someone with a good following builds something with AI, and people go mad and perceive that person as some kind of god. I clearly don't understand it.
Just as an example, after Karpathy open-sourced autoresearch, you might have seen a variety of different flavors that employ the same idea across various domains, but I think a Meta researcher pointed out that it is just a type of search method, like what Optuna does with hyperparameter search.
Basically, people should think from first principles. But the current state of tech Twitter is pathetic; any lame idea + genAI goes viral, without even the slightest thought about whether genAI actually helps solve the problem or improves the existing solution.
(Side note: I saw a blog post from someone at a top US university writing about OpenClaw x AutoResearch, and I was like, WTF?! Because, as we all know, OpenClaw was just hype that aged like milk.)
---
> The slowness was not a tax on the real work; the slowness was the real work.
Well said! People should understand that learning things takes time, building things takes time, and understanding things deeply takes time.
Someone building a web app using AI in 10 minutes is not ahead of, but behind, the person who is actually going one or two levels of abstraction deeper to understand how HTML/JS/Next.js works.
I strongly believe the tech industry will realise sooner or later that AI doesn't make people learn faster; it just speeds up the repetitive manual tasks. And people should use AI in that regard only.
The (real) cognitive task of actually learning is still in the hands of humans, and it is slow. That slowness is not a bottleneck; it's just how we humans are, and it should be respected.
Increasingly, there is a disconnect between established operational/corporate systems and the new AI-enhanced powers of individual workers.
The over-production of documents is just one symptom. It's clear that organizations are struggling to successfully evolve in the era of worker 'superpowers'. Probably because change is hard!
Perhaps this is indicative of a failure of imagination as much as anything? The AI era is not living up to its potential if workers are given superpowers but are not empowered to use them effectively.
Empowered teams and individuals have more accountability and ownership of business outcomes - this points to a need for flatter hierarchies and enlightened governance, supported by appropriate models of collaboration and reporting (AI helps here too!).
In the OP article the writer IMHO reached the wrong conclusion about their colleague who built a system that didn't work - this sounds like the sort of initiative that should be encouraged, and perhaps the failure here points to a lack of technical support and oversight of the colleague's project.
Now more than ever, organizations need enlightened leadership with flexible mindsets, capable of envisioning and executing radical organizational strategies.
We were promised GLaDOS, and were given Wheatley.