Hacker News

Appearing productive in the workplace

1246 points by diebillionaires, yesterday at 4:18 PM | 485 comments

Comments

lqet, today at 11:04 AM

> The first is when novices in a field are able to produce work that resembles what their seniors produce [...]
>
> The second is when people generate artifacts in disciplines they were never trained in.

There is a third shape: experts who have become so reliant on and accustomed to AI that it dilutes their previously sharp judgment and, importantly, their taste. I am seeing more and more work produced by experts that seems strangely out of character. A needlessly verbose text written by someone who was previously allergic to verbosity. An over-engineered solution (complete with CLI, storage backend, documentation, unit tests) for a trivial problem that the same person would have solved with an elegant bash one-liner only 3 years ago. The work itself is always completely immune to any rational criticism, because it checks all the boxes: extensive documentation, scalable, high test coverage, perfect code style, and for texts perfect grammar, non-offensive, seemingly objective. But, for lack of a better word, it simply lacks taste.

wcfrobert, yesterday at 6:23 PM

> "Requirements documents that were once a page are now twelve. Status updates that were once three sentences are now bulleted summaries of bulleted summaries. Retrospective notes, post-incident reports, design memos, kickoff decks: every artifact that can be elongated is, by people who do not read what they produce, for readers who do not read what they receive."

Great article. The "elongation" of workplace artifacts resonated with me on a deep level. It reminded me of when I had to be extra wordy to meet the 1,000-word minimum for my high school essays. Professional formatting, length, and clear prose are no longer indicators of care and work quality (they never really were, but in the past, if someone drafted a twelve-page spec, at least you knew they cared enough to spend a lot of time on it).

So now the "productivity-gain bottleneck" is people who still care enough to review manually.

TrackerFF, today at 7:37 AM

I watched a video of an (unemployed) programmer lamenting the current job market. He had been coding for a good while but had recently been laid off. The video mainly concerned the search and interview process, but it also highlighted something I find true and important:

Right now we're in a gold rush. Companies, be they established ones or startups, are in a frenzy to transform themselves or launch AI-first products.

You are not rewarded for building extremely robust and fast systems. The goal right now is essentially to build ETL and data-piping systems as fast as humanly (or inhumanly) possible, and to add as many features as possible. The quality of the software is of less importance.

And yes, senior engineers with other priorities are being overshadowed, even left in the dust, if they don't use tools to enhance their speed. As the article states, there are novice coders, even non-coders, pushing out features like you wouldn't believe. As long as these yield the right output and don't crash the systems, that's a gold star.

Of course, there are still many companies whose products do not fall under that and very much rely on robust engineering - but at least in the startup space there are overwhelmingly many whose product is to gather data (external, internal), add agents, and perform some action for the client.

You need extremely competent, critically thinking technical leaders at the top to tackle this problem. But we're also in an age where people with somewhat limited technical experience are becoming CTOs or highly ranked technical workers in an org, for no other reason than that they know how to use modern AI systems and likely have a recent history of being extremely productive.

Animats, yesterday at 10:09 PM

The OP has an amusing side point - LLMs have automated sucking up to management. There is a large market for that.

His main point, though, is this:

> I have a colleague ... who spent two months earlier this year building a system that should have been designed by someone with formal training in data architecture. He used the tools well, by the standards by which use of the tools is currently measured. He produced a great deal of code, a great deal of documentation, a great deal of what looked, to anyone who did not know what to look for, like progress. He could not, when asked, explain how any of it actually worked. The work was wrong from the first day. The schemas, and more importantly the objectives, were wrong in a way that would have been obvious to anyone with two years in the field.

I've been reading many rants like that lately. If they came with examples, they would be more helpful. The author does not elaborate on "the schemas, and more importantly the objectives, were wrong". The LLM's schema vs. a "good" schema should have been in the next paragraph. That would change the article from a rant to a bug report. We don't know what went wrong here.

It's not clear whether the trouble is that the schema can't represent the business problem, or that the database performance is terrible because the schema is inefficient. If you have the schema and the objectives, that's close to a specification. Given a specification, LLMs can potentially do a decent job. If the LLM generates the spec itself, then it needs a lot of context which it probably doesn't have.

This isn't necessarily an LLM problem. Large teams producing in-house business process systems tend to fall into the same hole. This is almost the classic way large in-house systems fail.

proofofcontempt, yesterday at 5:51 PM

What is described here closely resembles my experience too.

My company is full of managers who haven't written code in years. They hired an architect 18 months ago who used AI to architect everything. To the senior devs it was obvious that everything was massively over-engineered, yet because he used all the proper terminology, he sounded more competent to upper management than the other senior staff who didn't. When called out, he would resort to personal attacks.

After about 6 months, several people left and the ones who stayed went all in on AI. They've been building agentic workflows for the past 12 months in an effort to plug the gap from the competent members of staff leaving.

The result: nothing of value has been released in the past 18 months. The business is cutting costs after wasting massive amounts on cloud compute for poorly designed solutions, and is making up for it by freezing hiring.

2001zhaozhao, today at 6:28 AM

Finally someone who nailed this problem. In the age of AI you need smart people who are aligned with the organization more than ever.

If people aren't aligned with the organization, then bad, BAD things happen when the political people get access to AI, and there's basically nothing you can do about it. They can use AI to fake things for a very extended time, then always find the most optimal way to cover up the problem before the consequences surface, and by that point they've already moved so far up the ladder that the consequences don't matter to them anymore. I think it's actively unsolvable in any org that is already deeply infested with politics.

On the other hand, having really smart people has massively increased in value. The only way to surface them is through natural selection on actual merit, which only an entrepreneurial environment can reliably provide.

All of this means that I think startups with star teams are going to absolutely dominate for a few years (not just executing faster with less bandwidth, but literally winning outright at everything) until near-full AI automation starts making the big firms win again, simply by virtue of throwing tokens at the problem.

oxag3n, yesterday at 6:35 PM

Software engineering seems uniquely able to enable this, due to a few factors:

* Many software engineers have never done real engineering work in their entire careers. In large companies it's even harder: you arrive as a small gear and are inserted into a large mechanism. You learn some configuration language some smart-ass invented to get a promo, "learn" the product by cleaning tons of those configs, refactoring them, "fixing" results in another bespoke framework by adjusting some knobs in the config language you are now an expert in. Five years pass and you are still doing that.

* There are many near-engineering positions in the industry. The guy who always told you how much he liked working with people and that's why he stopped coding; the lady who was always fascinated by the product and working with users. They all fill in the space in small and large companies as .*Ms.

* The train moves slowly, especially in large companies. Commit-to-prod can easily span months, with six months being the norm. For some large, critical systems, agentic code still hasn't reached production as of today.

Considering the above, AI is replacing some BS jobs; people who were near code but above it suddenly enjoy vibe-coding, and their shit still hasn't hit the fan in slow-moving companies. But oh man, it looks like a productivity boom.

danaw, yesterday at 8:29 PM

i have a strong suspicion that the most productive software teams that leverage llms to build quality software will use them for the following:

- intelligent autocomplete: the "OG" llm use for most developers where the generated code is just an extension of your active thought process. where you maintain the context of the code being worked on, rather than outsourcing your thinking to the llm

- brainstorming: llms can be excellent at taking a nebulous concept/idea/direction and expand on it in novel ways that can spark creativity

- troubleshooting: llms are quite good at debugging an issue like a package conflict, random exception, bug report, etc., and helping guide the developer to the root cause. llms can be very useful when you're stuck and you don't have a teammate one chair over to reach out to

- code review: our team has gotten a lot of value out of AI code review which tends to find at least a few things human reviewers miss. they're not a replacement for human code review but they're more akin to a smarter linting step

- POCs: llms can be good at generating a variety of approaches to a problem that can then be used as inspiration for a more thoughtfully built solution

these uses accelerate development while still putting the onus on the developers to know what they're building and why.

related, i feel it's likely teams that go "all in" on agentic coding are going to inadvertently sabotage their product and their teams in the long run.

nlawalker, yesterday at 5:27 PM

>People who cannot write code are building software. People who have never designed a data system are designing data systems. Most of it is not shipped; it is built, often for many hours, possibly shown internally with great vigor, used quietly, and occasionally surfaced to a client without much fanfare.

This made me think of How I ship projects at big tech companies[1], specifically "Shipping is a social construct within a company. Concretely, that means that a project is shipped when the important people at your company believe it is shipped."

[1] https://news.ycombinator.com/item?id=42111031

northernsausage, today at 7:31 AM

My line manager, using a lazy single-line description of a product, is generating whole product listings and HTML for our web shop, never checking it. SEO is poor; views and conversion are collapsing. Upper management is responding to the serious issues I raise with ChatGPT bullet-point lists that don't address the problem. In video conferences I can see people typing into GPT and reading back its instructions; suppliers are sending AI-generated product images; third-party site devs are running buggy site deployments with Claude Code credited as co-author. I can't take it anymore, it's an office of zombies.

grvdrm, yesterday at 9:52 PM

> The cost of producing a document has fallen to nearly zero; the cost of reading one has not, and is in fact rising, because the reader must now sift the synthetic context for whatever the document was originally about. Each individual decision to elongate seems rational, and each is independently rewarded — readers are more confident in longer AI-generated explanations whether or not the explanations are correct [5]. The collective effect is that the signal in any given workplace is harder to find than it was before any of this began. The checkpoints have been hidden, drowned in their own paperwork, even when the people drowning them were genuinely trying to “be brief”

I just finished working with a client that is producing documents as described in this quote. The first time I recognized it was when someone sent me a 13-page doc about a process and vendor when I needed a paragraph at most. In an instant, my trust in that person dropped to almost zero. It was hard to move past a blatant asymmetry in how we perceived each other’s time and desire to think and then write concise words.

AbbeFaria, today at 11:17 AM

> because the reader must now sift the synthetic context for whatever the document was originally about

> time wasted using AI on tasks that did not need it, on artifacts no one will read, on processes that exist only because the tool made it cheap to construct them. On decks that spell out things that previously didn’t even need to be said or were assumed.

I work at MSFT and, at least in my org, this is happening at warp speed. With every document I read, my first thought is: what is the kernel of the idea the writer was trying to convey? Because 95% of the content of the doc is just verbiage. You can always tell it's verbiage: the em-dashes, the rhythmic text, the green check mark emoji, etc. We are hoping that volume of output will make up for the quality, or lack thereof. More markdown files, more AGENTS.md files, but is that making us better developers? It certainly gives the illusion that we are faster, but I don't know how management thinks this will lead to tangible impact on the top line or bottom line.

In my experience, some of the best writing (in design docs and PM specs) at MSFT has been human-written. You can see the writer's clarity of purpose; there is no need to read it again; it is equivalent to having a 1-on-1 with the writer themselves. But AI-written slop, the less said the better.

This piece hits home. I wonder how the experience is at other Big Tech companies.

john_strinlai, yesterday at 5:06 PM

>I sat with it for a while, weighing whether to debate someone who was visibly copy-pasting verbatim from a model.

i have found some small amusement in responding in kind to people who do this (copy/pasting their ai output into my ai, pasting my ai's response back). two humans acting as machines so that two machines can cosplay communicating like humans.

wartywhoa23, today at 11:05 AM

Humankind is just ankle-deep in AI yet, but there's already such huge potential for that Butlerian Jihad to come true one day.

groos, today at 12:47 AM

During the last few months, when AI usage was mandated in our team and usage exploded, our team's throughput barely changed. Now, if this were due to people working 2 hours a day and painting, cooking, and playing golf the rest of the day, it would be a great result; but I see many people working past 6pm, and yet the output is mostly the same. We are not tackling harder problems or fixing more bugs despite authoring numerous skills for AI. Eventually the reckoning is sure to come, and I think it will not be pretty.

vachina, yesterday at 5:44 PM

> Never ask a model for confirmation; the tool agrees with everyone.

Ditto. LLMs will somehow find fault in code that I know is correct when I tell it there’s something arbitrarily wrong with it.

The problem is that LLMs often take things literally. I’ve never successfully had LLMs design entire systems (even with planning) autonomously.

futureproofd, yesterday at 8:22 PM

I noticed early in AI adoption in the workplace that some colleagues took advantage of the technology by appearing hyper-proactive: new TODs weekly, fresh refactoring ideas, novel ways to solve age-old problems with shiny new algorithms. Fast-forward to today, and this is occurring two-fold. Not only are they trying to appear more proactive; combined with the fear of AI layoffs, they're creating solutions to problems before the problems have even been fully defined.

For example, I was tasked with looking into a company-wide solution for a particular architectural problem. I thought delivering a sound solution would earn me some kudos; alas, I wasn't fast enough. An intern had already figured it out and written a TOD. I find myself too tired to compete.

Majikujanisch, today at 10:47 AM

Someone I know found a wonderful word for this: Inkompetenzkompensierungskompetenz. It's German and basically means the competence to compensate for one's own incompetencies. The difference is that AI now does that for us, and we are slowly noticing how important that competence is in day-to-day life.

tedggh, today at 7:27 AM

I have to produce a great deal of documentation at work for our customers, most of it regulatory and compliance assessments.

Some of the sources I need to use come from agencies in the government or working with the government and are often over a thousand pages long.

So AI has been incredibly helpful here because a lot of what I need to do is map this huge bureaucratic set of guidelines and policies to each customer’s particular situation.

Aware of the sloppy nature of LLMs, I created my own workflow, which resembles coding more than document drafting.

I use Codex, VSCode, and plain markdown; I don’t use MS Word or Copilot like all my other colleagues.

I invest a great deal of time still doing manual labor like researching and selecting my sources, which I then make available for Codex to use as its single source of truth.

I start with a skill that generates the outline, which is often longer than it should be. Sometimes I get, say, an 18-section outline and I ask Codex to cut it in half. Then I ask for a preliminary draft of each section (each in a separate markdown file) and read through and update as necessary, before I ask the agent to develop each section in full, then proofread and update again.

When I’m satisfied, I merge all the sections into one single markdown file and run another skill to check for repetition, ambiguity, length, etc., and usually a few legitimate improvements are recommended.

The whole process can still take me several days to produce a 20-30 page compliance document, which gets read, verified, and approved by myself and others on my team before it goes out.

The productivity gains are pretty obvious, but most importantly I think the content is of better quality for the customer.
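The merge-and-check step of a workflow like this can be sketched in a few lines (a hypothetical illustration, not the commenter's actual skill; the numbered-filename convention and the verbatim-repetition check are my assumptions):

```python
from pathlib import Path


def merge_sections(section_dir: str, out_file: str) -> str:
    """Concatenate per-section markdown drafts into one document.

    Sections are assumed to be named like 01-intro.md, 02-scope.md,
    so lexicographic order is the document order.
    """
    sections = sorted(Path(section_dir).glob("*.md"))
    merged = "\n\n".join(p.read_text().strip() for p in sections)
    Path(out_file).write_text(merged + "\n")
    return merged


def flag_repeated_paragraphs(markdown: str) -> list[str]:
    """Naive repetition check: report any paragraph that appears twice."""
    seen: set[str] = set()
    repeats = []
    for para in markdown.split("\n\n"):
        p = para.strip()
        if p and p in seen:
            repeats.append(p)
        seen.add(p)
    return repeats
```

A real repetition/ambiguity check would have to be semantic (the kind of thing the agent skill does); this only catches verbatim duplicates, but it is a cheap pre-filter to run before the final manual proofread.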

wazHFsRy, today at 5:12 AM

As everyone is an expert now[1], at least on paper, I think it is unfortunately time for a shift again. For years I was a big proponent of asynchronous remote work. But it seems like the only reasonable way forward is to discuss things face to face. I still prefer to prepare things async, but then discuss them in person to find out whether people actually understand what they are talking about. So far I have also had good results being really frank and honest with colleagues when something is clearly AI expertise and not that person's expertise.

1: https://www.dev-log.me/everyone_is_an_expert_now/

auggierose, today at 9:04 AM

The right use of AI requires stellar leadership, and to be honest, I don't think that kind of leadership exists. I am using AI just for myself, and I encounter so many traps and pitfalls. For example, I generate an article on a topic, and while this is very useful for getting started, I then have to go through every sentence, because the AI makes some overconfident statements that are just not true in that form. This is still very helpful, because then I have to think about why they are not true. But I don't see how that can ever scale; how would I know that colleagues are also diligent like this?

AI is incredible in three scenarios: a) what I just described, to get you started, b) to generate artifacts that can be rigorously checked (and I don't mean tests, I mean proofs), c) where your artifacts don't have a meaningful notion of correctness, like a work of art.

c) is a matter of taste, b) certainly scales, but a) is where I think trust will be essential, and I am not ready to trust anyone with that except myself.

Oh, and I think currently c) is being applied to software engineering, by people who cannot distinguish the engineering part of software from the art part. Which is just funny right now, and will eventually be catastrophic.

jwr, today at 10:26 AM

I find the "em dashes mean AI" trope annoying — I've been typing em dashes since I learned how to do this on a Mac, which was around 2007 I think. Shift-Option-hyphen became second nature, just like Option-; for an ellipsis (…). It's just how I write. Two hyphens now seem outright barbaric.

jdw64, yesterday at 5:16 PM

After reading this article, I can definitely feel how productivity rises inside organizations.

More precisely, this feels like a person who would be loved by management. The article almost reads like a practical manual for increasing perceived productivity inside a company.

The argument is repetitive:

1. AI generates convincing-looking artifacts without corresponding judgment.
2. Organizations mistake those artifacts for progress.
3. Managers mistake volume for competence.

The article explains this same structure several times. In fact, the three main themes are mostly variations of the same claim: AI allows people to produce output without having the competence to evaluate it.

The problem is that the article is criticizing a context in which one-page documents become twelve-page documents, while containing the same problem in its own form.

The references also do not seem to carry much real argumentative weight. They mostly decorate an already intuitive workplace complaint with academic authority. This is something I often observe in organizations: find a topic management already wants to hear about, repeat the central thesis, and cite a large number of studies that lean in the same direction.

There is also an irony here. The article criticizes a certain kind of workplace artifact, but gradually becomes very close to that artifact itself. This kind of failure, criticizing a pattern while reproducing it, seems almost like a recurring custom in the programming industry.

Personally, I almost regret that this person is not in the same profession as me. If someone like this had been a freelancer, perhaps the human rights of freelancers would have improved considerably.

ChrisMarshallNY, yesterday at 6:50 PM

I spent most of yesterday deleting and replacing a bunch of code that was generated by an LLM. For the most part, the LLM's assistance has been great.

For the most part.

In this case, it decided to give me a whole bunch of crazy threaded code, and, for the first time in many years, my app started crashing.

My apps don't crash. They may have lots of other problems, but crashing isn't one of them. I'm anal. Sue me.

For my own rule of thumb, I almost never dispatch to new threads. I will often let the OS SDK do it, and honor its choice, but there's very few places that I find spawning a worker, myself, actually buys me anything more than debugging misery. I know that doesn't apply to many types of applications, but it does apply to the ones I write.

The LLM loves threads. I realized that this is probably because it got most of its training code from overenthusiastic folks, enamored with shiny tech.

Anyway, after I gutted the screen, and added my own code, the performance increased markedly, and the crashes stopped.

Lesson learned: Caveat Emptor.

randusername, yesterday at 6:57 PM

> The cost of producing a document has fallen to nearly zero; the cost of reading one has not, and is in fact rising, because the reader must now sift the synthetic context for whatever the document was originally about.

This resonates. It's a spectacular full-reversal kind of tragedy, because it used to be asymmetric the other way: the author put in 10 effort points compiling valuable information, and the reader put in 1 effort point to receive the transmission.

sublimefire, today at 8:59 AM

One important part is not expanded on: incentives. If you really think about it, that is the crux of the problem. If I am recognized for creating documents, PRs, features, decks, and token use, and NOT for doc/PR/deck reviews, feedback, or fixing features, then the outcome is what we see now.

An example of a new feature in the company goes the following way:

- some request is raised by person1

- PR is generated with an "agent" by person2

- PR is reviewed using an "agent" by person3

- feature is merged and shipped

- person1 is happy and records a video with a feature to be shown to the clients

- in a next call with the leadership this feature is declared as a success

It all looks good until you look at the implementation, and there is very little time to intervene. Recently I find myself trying to review PRs quickly before they get merged, just to be on the safe side, as people do not even look at the code.

Art9681, today at 4:03 AM

Counterpoint: if humans shipped perfect products, they would no longer have jobs. The majority of time spent in an organization is fixing problems humans caused, for good reasons and bad excuses. We are not machines.

What we, collectively as a species, are building now with AI is a mirror that reflects the failures and successes we contributed to.

No engineer here has a perfect record. No senior or principal either. We make a ton of mistakes that are rarely written about.

This is an opportunity for the ones that assume they have mastered the craft to put up or shut up. Anyone can write a blog with or without AI.

Put your skills to work and implement the system that solves the problem you lament. Otherwise, get off my lawn.

It's another voice screaming into the void without offering a solution. The solution is not to build a faster horse. It is not to reminisce about the past. That ship has sailed.

Fix the problem. It's the 100th blog repeating the same thing we've read for two years. Nothing was accomplished here except wasting time on the obvious to pat yourself on the back.

A lot of time is being wasted writing blogs raising red flags.

That's the easy part.

jorisw, today at 9:15 AM

Having trouble understanding the final line:

> Also, those that claimed this article is ironically a casualty of it’s own complaint are 100% right, Kudos.

Why would the article be a casualty of its own complaint?

drowntoge, yesterday at 6:18 PM

"Output-competence decoupling" is my new favorite keyword.

gehsty, today at 8:55 AM

Where does it end? I don’t see people using AI less as time moves on.

I’ve not seen a cohesive statement on what the world looks like when LLMs can do work perfectly (which on a long enough timeline is coming).

Do Google/Anthropic/OpenAI capture all the value? Do clients still want consultancies? If the client wants something that a human would use to do something, does that project hold any value in an LLM-dominant world? Why even bother?

bambax, yesterday at 6:11 PM

I intensely agree with everything that's being said in TFA; this however could be nuanced:

> Never ask a model for confirmation; the tool agrees with everyone

If asked properly, LLMs can be used to poke holes in an existing reasoning or come up with new ideas or things to explore. So yes, never ask a model for confirmation or encouragement; but you can absolutely ask it to critique something, and that's often of value.

bjornevik, today at 8:43 AM

> 'In many of the rooms I now find myself in, expertise has been asked to look the other way: to deliver faster, produce more, integrate the tools more deeply, get out of the way of the colleagues who are “getting things done”.'

The entire article resonates, but that particular passage gets at the core of a lot of my current frustrations around the use of these systems. Great article!

mikeshi42, today at 5:39 AM

While I agree with some of these observations, the research cited in the article really does not match the claims, from what I can tell.

> An NBER study of support agents [2] found generative AI boosted novice productivity by about a third while barely helping experts. Harvard Business School researchers found the same pattern in consulting work [3].

The first work cited was a research study on GPT-3(!) from 2020, a barely coherent model relative to today's SOTA.

The second HBS research study literally finds the opposite of what's claimed:

> we observed performance enhancements in the experimental task for both groups when leveraging GPT-4. Note that the top-half-skill performers also received a significant boost, although not as much as the bottom-half-skill performers.

Where bottom-half-skill participants with AI outperformed top-half-skill participants without AI. (And top-half-skill participants gained another 11% improvement when paired with AI.) Again, GPT-4-level model intelligence (3 years ago) is a far cry from frontier models today.

giantg2, yesterday at 6:29 PM

The most productive people seem to be the ones who are skeptical of AI but have found compelling cases to use it for, and who aren't afraid to correct it.

Weryj, yesterday at 8:03 PM

I brought this up during our AI workshops, but I called it the “confident idiot”.

Seeing the idea explored in such depth is great, I really am concerned about this.

longevitygreenl, today at 10:24 AM

Speaking as someone who's been an engineer for 36 years and is now solo: you have a genuinely interesting perspective on performative productivity vs. actual output.

xyproto, today at 8:20 AM

I believe the assumption that customers reviewing the output artifacts are "the final boss" is wrong. If AI use spreads, customers are also likely to use AI to review the artifacts. Vision, taste, and curation remain, though.

juancn, yesterday at 5:28 PM

AI can be (and often is) a confident incompetence amplifier.

nomilk, today at 1:25 AM

> He produced a great deal of code, a great deal of documentation, a great deal of what looked, to anyone who did not know what to look for, like progress. He could not, when asked, explain how any of it actually worked.

Solution: managers need to ask 'how does $THING_YOU_MADE actually work?'.

Pre-AI, it could be taken for granted that if someone was skilled enough to write complex code/documentation, then they had a sound understanding of how it worked. But that's no longer true. It only takes 5 minutes of questioning to figure out whether they know their stuff. It's just that managers aren't asking (or perhaps aren't skilled enough to judge the answers).

On the issue of over-enthusiasm from upper management, this may be only temporary since it makes sense to try lots of new ideas (even the crazy ones) at the start of a technological revolution. After a while it will become clearer where the gains are and the wasteful ideas will be nixed.

CWwdcdk7h, today at 9:55 AM

So the opposite of quiet quitting is loud slopping?

MattyRad, yesterday at 10:17 PM

I think the author is describing the new incarnation of the Death March. In a Death March, contributors know that an active project will be dead on arrival, or cannot be redeemed. Maybe a small difference here is that the AI-equipped contributors won't be aware of the project's status (i.e. futile).

Maybe this means AI has democratized Death Marches.

MoonWalk, yesterday at 11:22 PM

The article presents a pretty good rundown on the state of affairs.

"A growing body of work calls this output-competence decoupling"

Given that I don't think he meant that there's a thing called "output competence," I think he meant "output/competence decoupling."

darepublic, yesterday at 5:57 PM

I was tasked with coming up with a solution in 5 weeks that took another firm six months to produce. I've never used agentic coding so much before, or known my code less well. The requirements are garbage though: vague and just "copy what these other guys did, but better". I tried for a couple of weeks to get better specs but eventually gave up and just started building stuff to present.

JohnMakin, yesterday at 9:31 PM

The “not helping experts” thing is a bit myopic. Everyone, no matter how much of a rockstar you are, has weak areas or areas of tedium that can be automated. For me, what hindered me in my career in the past was organizing a lot of tasks at once, communicating changes effectively across orgs (e.g. through Jira), documentation, ticket management. That is a non-concern now, and the efficiency gain there has been incredible. With the core things I do well, yeah, it doesn't help a ton, other than being able to type way faster than I can (which is still really good).

If I’m having it do stuff I’m unfamiliar with, it does tend to do better than I would or steer me at least in a direction I can be more informed about making decisions.

monkeydust, today at 7:47 AM

If people were incentivized to solve problems with the least amount of token spend, that would help.

mattas, today at 1:25 AM

This is what makes measuring productivity so hard. Let's say you're a worker who is responsible for updating the status of an order with a bunch of metadata.

One day, 100 orders come in for you to update. The next day, you get 50 orders to update. Did your productivity just get cut in half? If you get 200 orders on the third day, did you just quadruple your productivity from the previous day?
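The arithmetic in this example can be made concrete with a toy calculation (the metric names are mine, purely illustrative): raw daily output tracks incoming demand, so a rate normalized by the work actually available is a steadier, less misleading signal.

```python
def completion_rate(completed: int, incoming: int) -> float:
    """Output normalized by the work that was actually available that day."""
    return completed / incoming


# The three days from the example, assuming the worker clears every
# order that comes in: (orders updated, orders received) per day.
days = [(100, 100), (50, 50), (200, 200)]

raw_counts = [c for c, _ in days]                 # [100, 50, 200]: looks volatile
rates = [completion_rate(c, n) for c, n in days]  # [1.0, 1.0, 1.0]: steady
```

By the raw count, day two looks like productivity was halved and day three like it quadrupled; by the normalized rate, the worker performed identically all three days.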

skor, today at 7:18 AM

yes, imho part of the problem with vibe coders is that the training data is full of low-quality advice/code, and it seems to me you won’t ever get rid of it. A perfect feedback loop to clean the training data of bad advice/code without massive human intervention seems impossible as well.

groos, today at 12:42 AM

As I am continually amazed at how well Claude 4.7 deals with highly complicated C++ code, I am also becoming painfully aware of the developing situation mentioned in this article: I no longer completely understand the code it is editing, not because I'm incapable of doing it, but because I have not authored the changes. I am trading throughput for understanding, and, eventually, judgment.

bodegajed, today at 3:01 AM

Great article. If the author is browsing HN, please hear me out. They say the pen is mightier than the sword, though the reason why is rarely made clear; I believe it is because the pen can change minds. After re-reading, this article may have changed my mind toward abandoning agentic coding!

guizadillas, yesterday at 4:53 PM

Sidenote: why is the post dated in the future? (May 28, 2026)

