I write detailed specs. Multifile with example code. In markdown.
Then hand over to Claude Sonnet.
With hard requirements listed, I found that the generated code missed requirements, contained duplicate code, and even added unnecessary data wrangling (mapping objects into new objects of narrower types when that isn't needed), along with tests that fake results and work around failures just to pass.
So it turns out that I'm not writing code; I'm reading lots of code.
What I know first-hand from before Gen AI is that writing code is the easy part. Reading code, understanding it, and building a mental model of it is far more labour intensive.
Therefore I need more time and effort with Gen AI than I needed before, because I have to read a lot of code, understand it, and ensure it adheres to the mental model I have.
Hence Gen AI, at the price point Anthropic offers, is a net negative for me. I am not vibe coding; I'm building real software that real humans depend upon, and my users deserve better attention and focus from me. So I'll be cancelling my subscription shortly.
Writing detailed specs and then giving them to an AI is not the optimal way to work with AI.
That's vibecoding with an extra documentation step.
Also, Sonnet is not the model you'd want to use if you want to minimize cleanup. Use the best available model at the time if you want to attempt this, but even those won't vibecode everything perfectly for you. This is the reality of AI, but at least try to use the right model for the job.
> Therefore I need more time and effort with Gen AI than I needed before
Stop trying to use it as all-or-nothing. You can still make the decisions, call the shots, write code where AI doesn't help and then use AI to speed up parts where it does help.
That's how most non-junior engineers settle into using AI.
Ignore all of the LinkedIn and social media hype about prompting apps into existence.
EDIT: Replaced a reference to Opus and GPT-5.5 with "best available model at the time" because it was drawing a lot of low-effort arguments
I must be doing something very different from everyone else, but I write what I want and how I want it, and Opus 4.7 plans it for me; then I carefully review. Oftentimes I need to validate and check things, and sometimes I've revised the plan multiple times. Then comes implementation, which I still use Opus for, because I get a warning that my current model holds the cache, so Sonnet shouldn't implement. And honestly, I'm mostly within my Pro subscription; granted, I also have ChatGPT Plus, but I've mostly only used that as the chat/quick-reference model. But yeah, it takes some time to read and understand everything, and a lot of the time I make manual edits too.
>I write detailed specs. Multifile with example code. In markdown. Then hand over to Claude Sonnet. With hard requirements listed, I found that the generated code missed requirements, contained duplicate code, and even added unnecessary data wrangling (mapping objects into new objects of narrower types when that isn't needed), along with tests that fake results and work around failures just to pass.
Stop doing that. Micromanage it instead. Don't give it the specs for the system, design the system yourself (can use it for help doing that), inform it of the general design, but then give it tasks, ONE BY ONE, to do for fleshing it out. Approve each one, ask for corrections if needed, go to the next.
Still faster than writing each of those parts yourself (a few minutes instead of multiple hours), but much more accurate.
> With hard requirements listed, I found out that the generated code missed requirements,
This is hardly a surprise, no? No matter how much training we run, we are still producing a generative model. And a generative model doesn't understand your requirements and check them off one by one. It predicts the next most likely token from a given prompt. If the most statistically plausible way to finish a function looks like a version that ignores your third requirement, the model will happily follow through. There are no real rules in your requirements doc. They are just the conditioning events X in a glorified P(Y|X). I'd venture to guess that sometimes missing a requirement actually increases the probability of the generated tokens, so the model will happily allow the miss. Actually, "allow" is too strong a word. The model does not allow shit. It just generates.
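The P(Y|X) framing can be sketched as a toy sampler. To be clear, everything here is made up for illustration — a real model conditions on thousands of tokens and a vocabulary of tens of thousands, not a two-entry table — but it shows the core point: a requirement in the prompt only shifts probabilities, it doesn't enforce anything.

```python
import random

def sample_next_token(dist: dict[str, float]) -> str:
    """Weighted random draw from a conditional distribution P(Y | prompt)."""
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical conditional distribution after a prompt that says
# "requirement: validate the input". Nothing forces the requirement-following
# continuation to be the most probable one.
dist_given_prompt = {
    "return data": 0.55,      # statistically common pattern; skips validation
    "validate(data)": 0.45,   # what the spec actually asked for
}

random.seed(0)
counts = {t: 0 for t in dist_given_prompt}
for _ in range(10_000):
    counts[sample_next_token(dist_given_prompt)] += 1

# With these (invented) probabilities, the requirement-violating
# continuation tends to be sampled more often than the compliant one.
```

The sampler has no concept of "requirement satisfied"; the only lever you have is making the compliant continuation more probable, which is why prompt wording helps sometimes and silently fails other times.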
I'm starting to think a lot of the problem people are having is just that they have unrealistic expectations.
I'm not having the same problem as you and I follow a very similar methodology. I'm producing code faster and at much higher quality with a significant reduction in strain on my wrists. I doubt I'm typing that much less, but what I am typing is prose which is much more compatible with a standard QWERTY keyboard.
I think part of it is that I'm not running forward as fast as I can and I keep scope constrained and focused. I'm using the AI as a tool to help me where it can, and using my brain and multiple decades of experience where it can't.
Maybe you're expecting too much and pushing it too hard/fast/prematurely?
I don't find the code that hard to read, but I'm also managing scope and working diligently on the plans to ensure it conforms to my goals and taste. A stream of small well defined and incremental changes is quite easy to evaluate. A stream of 10,000 line code dumps every day isn't.
I bet if you find that balance you will see value, but it might not be as fast as you want, just as fast as is viable which is likely still going to be faster than you doing it on your own.
>Then hand over to Claude Sonnet.
Have you tried Opus 4.6 with "/effort max" in Claude Code? That's pretty much all I use these days, and it is, honestly, doing a fantastic job. The code it's writing looks quite good to me. Doesn't seem to matter if it's greenfield or existing code.
If code is harder to read than to write, you're doing yourself a disservice by having the output stage not be top shelf.
I'm confused. If you have detailed, specific expectations, why aren't you using the best model available? Even if you were using Opus 4.7, I would ask whether you're using high/xhigh effort by default.
Feels crazy to me for people to use anything other than the best available.
I have the same feeling.
Like there is no way in the world that Gen AI is faster than an actual cracked coder shooting off the exact bash/sql commands he needs to explore and writing a proper intent-communicating abstraction.
I'm thinking the difference is orders of magnitude.
On top of that it adds context loss, risk of distraction, and the extra work of reading after the job is done. Plus you'll have less of a mental model no matter how well you read, because active > passive.
Man, it was really the weirdest thing when Claude Code started hiding more and more changes. That's what you need: staying closely in the loop.
I give Claude small incremental tasks to do and it usually does them flawlessly. I know how to design the software and break it into incremental tasks. Claude does the work. The productivity increase has been incredible. I think I'll be able to bootstrap a single-person lifestyle business just using Claude.
I don't know. I don't write detailed specs, but make it very iterative, with two sessions. One for coding and one for reviews at various levels.
On its own, the coding window makes mistakes, duplicates code, and doesn't follow the patterns. The reviewer catches most of this, and the coder fixes them all after rationalizing them.
Works pretty well for me. This model is somewhat institutionalized in my company as well.
I use CC Opus 4.7 or Codex GPT 5.4 High (more and more Codex of late).
This is how I feel with AI math proofs. I’m not sure where they’re at now, but a year ago it took so much more time to check if an LLM proof was technically correct even if hard to understand, compared to a well structured human proof.
Maybe it was Timothy Gowers who commented on this.
Lots of human proofs have the unfortunate "creative leap" that isn't fully explained but has some detectable subtlety. LLMs end up making large leaps too, but too often the subtle ways mathematicians think and communicate are lost, and so the proof becomes much more laborious to check.
Like you don’t always see how a mathematician came up with some move or object to “try”, and to an LLM it appears random large creative leaps are the way to write proofs.
Now that there is Claw Code[1], seems like many of these cancellations are easier to do.
I use open spec to negotiate requirements before the handoff, it's helped me a lot. You could also use GSD2 or Amazon's Kiro, or Spec Kit but I find they have too many stages and waste tokens.
And it silently leaves 25 TODO comments in the code while reporting to you that everything is done.
This may be a bit silly, but I do what you do and then tell Claude to review the code it wrote and compare it to the specs. It will often find issues and fix them. Then I review the reviewed code, and it's leagues better than the un-reviewed code.
This may be worth trying out.
Beautifully stated and I couldn't agree more. This is my experience.
you are holding it wrong. For real this time.
I use the "Superpowers" plugin that creates an initial spec via brainstorming together, and then takes that spec and creates an implementation spec file based on your initial spec. It also has other agents make sure the spec doesn't drift between those two stages and does its own self-reviews. Almost every time, it finds and fixes a bunch of self-review issues before writing the final plan. Then I take that final plan and run it through the actual execution phase that does its own reviews after everything.
Just saying that I know a lot of people like to raw dog it and say plugins and skills and other things aren't necessary, but in my case I've had good success with this.
That is why I still use the chatbots and not the CLI/desktop tools. I am in 100% control. I mainly ask questions surrounding syntax in languages I am not well experienced in, snippets/examples, and sometimes feedback on certain bits of logic.
I feel like I have easily multiplied my productivity because I do not really have to read more than a single chat response at a time, and I am still familiar with everything in my apps because I wrote everything.
I've been working on Window Manager + other nice-to-haves for macOS 26. I do not need a model to one-shot the program for me. However, I am thrilled to get near instantaneous answers to questions I would generally have to churn through various links from Google/StackOverflow for.
> Then hand over to Claude Sonnet.
Well, there's your problem. Why aren't you using the best tool for the job?
> or even unnecessary code wrangling data (mapping objects into new objects of narrower types when won't be needed)
Dude! The amount of ad-hoc, interface-specific DTOs that LLM coding agents define drives me up the wall. Just use the damn domain models!
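To make the complaint concrete, here's a minimal sketch of the pattern being described. The `User` model, `UserSummaryDTO`, and `to_summary` are all hypothetical names invented for illustration, not from any real codebase:

```python
from dataclasses import dataclass

# A domain model that already exists in the codebase (hypothetical).
@dataclass
class User:
    id: int
    name: str
    email: str

# The kind of ad-hoc, interface-specific DTO an agent tends to invent:
# a narrower copy of the domain model.
@dataclass
class UserSummaryDTO:
    id: int
    name: str

def to_summary(user: User) -> UserSummaryDTO:
    # Pure ceremony: every field is copied 1:1 from the domain model,
    # so this mapping step adds code and a second type but no behavior.
    return UserSummaryDTO(id=user.id, name=user.name)

user = User(id=1, name="Ada", email="ada@example.com")
summary = to_summary(user)
```

Passing `User` directly would have worked just as well here; a DTO only earns its keep when the interface genuinely needs a different shape (a serialization boundary, a stability contract across services, etc.).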
You can quickly get something "working" until you realise it has a ton of subtle bugs that make it unusable in the long run.
You then spend months cleaning it up.
Could just have written it by hand from scratch in the same amount of time.
But the benefit is not having to type code.
I very much value and appreciate the first four paragraphs! [3] This is my favorite kind of communication in a social setting like this: it reads more like anthropology and less like judgment or overgeneralization.
The last two paragraphs, however, show what happens when people start trying to use inductive reasoning -- and that part is really hard: ...
> Therefore I need more time and effort with Gen AI than I needed before because I need to read a lot of code, understand it and ensure it adheres to what mental model I have.
I don't disagree that the above is reasonable to say. But it isn't all of what needs to be said -- not even enough of it. The rate of change is high, and the amount of adaptation required is substantial. This, in a nutshell, is why asking humans to adapt to AI is going to feel harder and harder. I'm not criticizing people for feeling this. But I am criticizing the one-sided logic people often reach for.
We have a range of options in front of us:
A. sharing our experience with others
B. adapting
C. voting with your feet (cancelling a subscription)
D. building alternatives to compete
E. organizing at various levels to push back
(A) might start by sounding like venting. Done well, it progresses into clearer understanding and hopefully even community building towards action plans. [1]

> Hence Gen AI at this price point which Anthropic offers is a net negative for me because I am not vibe coding, I'm building real software that real humans depend upon and my users deserve better attention and focus from me hence I'll be cancelling my subscription shortly.
The above quote is only valid under some pretty strict (implausible) assumptions: (1) "GenAI" is a valid generalization for what is happening here; (2) the person cannot learn and adapt; (3) the technology won't get better.
[1]: I'm at heart more of a "let's improve the world" kind of person than "I want to build cool stuff" kind of person. This probably causes some disconnect in some interactions here. I think some people primarily have other motives.
Some people cancel their subscriptions and kind of assume "the market and public pushback will solve this". The market's reaction might be too slow or too slight to actually help much. Some people put blind faith in markets helping people on some particular time scale. This level of blind faith reminds me of the Parable of the Drowning Man. [2] In particular, markets often send pretty good signals that mean, more or less, "you need to save yourself, I'm just doing my thing." Markets are useful coordinating mechanisms in the aggregate when functioning well. One of the best ways to use them is to recognize "I don't have enough of a cushion or enough skills to survive what the market is coordinating" and make a Plan B.
Some people go further and claim markets are moral by virtue of their principles; this becomes moral philosophy, and I think that kind of moral philosophy is usually moral confusion. Broadly speaking, in practice, morality is a complex human aspiration. We probably should not abdicate our moral responsibilities and delegate them to markets, any more than we would say "Don't worry, people who need significant vision correction (or face some other barrier to modern life)... evolution will 'take care' of you."
One subscription cancellation is a start (if you actually have a better alternative, and that alternative is better for the world... which is debatable given the current set of alternatives!)
Talking about it, e.g. here on HN, might be one place to start. But HN is also kind of a "where frustration turns into entertainment, not action" kind of place, unfortunately. Voting is cheap. Karma sometimes feels like more a measure of conformance than of quality thinking. I often feel like I am doing better when I write thoughtfully and still get downvotes -- maybe it means I got some people out of their comfort zone.
Here's what I try to do (but fail often): Do the root cause analysis, vent if you need to, and then think about what is needed to really fix it.
[2]: https://en.wikipedia.org/wiki/Parable_of_the_drowning_man
[3]: The first four are:
I write detailed specs. Multifile with example code. In markdown.
Then hand over to Claude Sonnet.
With hard requirements listed, I found out that the generated code missed requirements, had duplicate code or even unnecessary code wrangling data (mapping objects into new objects of narrower types when won't be needed) along with tests that fake and work around to pass.
So turns out that I'm not writing code but I'm reading lots of code.
Or just don't use AI to write code. Use it as a code reviewer assistant along with your usual test-lint development cycle. Use it to help evaluate 3rd party libraries faster. Use it to research new topics. Use it to help draft RFCs and design documents. Use it as a chat buddy when working on hard problems.
I think the AI companies all stink to high heaven and the whole thing being built on copyright infringement still makes me squirm. But the latest models are stupidly smart in some cases. It's starting to feel like I really do have a sci-fi AI assistant that I can just reach for whenever I need it, either to support hard thinking or to speed up or entirely avoid drudgery and toil.
You don't have to buy into the stupid vibecoding hype to get productivity value out of the technology.
You of course don't have to use it at all. And you don't owe your money to any particular company. Heck, for non-code tasks the local-capable models are great. But you can't just look at vibecoding and dismiss the entire category of technology.