I think it's obvious that they're not referring to the author or a specific person at all. They're talking about how the zeitgeist has changed. Look at the Hacker News archives from three or more years ago and it would be really hard to find anyone arguing that coding speed is not a bottleneck, or that engineers need to spend more time on collaboration. You would find a lot of arguments that leaving engineers alone to code is the best thing a business can do, and constant lambasting of meetings, documents, approvals, and other collaborative activities.
I think there are small pieces of truth on both sides of the argument, but the sudden pivot to claiming that coding speed doesn't matter feels half-baked to me. Coding speed is part of building a product, and speeding it up does provide benefit. There's a lot of denial about this, but I think the denial is rooted in emotion more than logic right now.
Needing focus to think is not the same as needing focus to write code.
It can take a whole day to find 10 good lines to write.
Speeding it up provides benefit if speed was the bottleneck to begin with. As the author notes or hints at, faster code output leads to more features being delivered, more room for experimentation, and so on. But that's not necessarily productivity if the features offer no value, if the experiments end up on a shelf, or if the maintenance burden and context become bigger than the organization can handle (even LLM-assisted).
I've done a lot of "rebuild" / "second system" projects, and the recurring theme is that the new version does less than the original. I don't think that's entirely down to the reality of second systems; I think it's partly because software grows over time but developers and managers rarely remove functionality. A full rebuild allows product owners (usually different people from those who owned the original software) to consider whether something is actually needed.
Maybe some have changed their views because circumstances radically changed?
You wouldn't find anyone saying typing speed was a problem; they wanted more time for thinking.
I don't think that this is very hypocritical on the part of the developers holding such views. Typing code has never been the bottleneck; building the mental model has. You need the mental model so you know how the domain and the actual model will interact, which is needed for anticipating what tests you need, what QA you need to do, and so on, as well as the limitations of the system. You can try to hand this off with a specification, but all specifications eventually meet the domain head on, often with catastrophic consequences, and you still need to do this sort of work anyway when writing the specification.
Fundamentally, LLMs do not construct a consistent mental model of the codebase (this can be seen if you, uh, read LLM code), and this is Bad for a lot of reasons. It's bad for long-term maintainability, it's bad for accurately modelling the code and its behaviour as a system, it's bad for testing and verifying it, and so on. Pretty much all of the tasks around program design require you to have that mental model.
You can absolutely get an LLM to show you a mental model of the code, but there is absolutely nothing that can 100% guarantee that that's the model it's actually using. Proof of this is in how they summarise documents, how inaccurate a lot of the documentation they generate is, and how inaccurate a lot of their code summaries are. Those would be accurate if the LLM were forming a mental model as it worked. It's a program that statistically generates plausible text; the fact that we got it to do more than that in the first place is very interesting and can imply a lot of things, but at the end of the day, whatever you ask of it, it will generate text. There is absolutely no guarantee around the accuracy of that text, and there effectively can never be.