Am I supposed to be impressed by this? I think people are now just using agents for the sake of it. I'm perfectly happy running two simple agents, one for writing and one for reviewing. I don't need to go be writing code at faster than light speed. Just focusing on the spec, and watching the agent as it does its work and intervening when it goes sideways is perfectly fine with me. I'm doing 5-7x productivity easily, and don't need more than that.
I also spend most of my time reviewing the spec to make sure the design is right. Once I'm done, the coding agent can take 10 minutes or 30 minutes. I'm not really in that much of a rush.
You can always tell claude to use red-green-refactor and that really is a step-up from "yeah don't forget to write tests and make sure they pass" at the end of the prompt, sure. But even better, tell it to create subagents to form red team, green team and refactor team while the main instance coordinates them, respecting the clean-room rules. It really works.
The trick is just not mixing/sharing the context. Different instances of the same model do not recognize each other to be more compliant.
This is Java EE all over again.
When I graduated in 2012 it was pushed everywhere, including my uni so my undergrad thesis was done in Java.
Everyone was learning it, certifying, building things on top of other things.
EJB, JPA, JTA, JNDI, JMS and JCA.
And them more things to make it even more powerful with Servlets, JSP, JSTL, JSF.
Many companies invested and built various application servers, used by enterprises by this day.
Every engineer I've met said Java is server side future, don't bother with other tech. You'll just draw data schema, persistence mapping, business logic and ship it.
I switched to C++ after Bjarne's talk I attended in 2013. I'm glad I did although I never worked as a software engineer. Following passion and going deep into technology was a bliss for me, the difference between my undergrad Java, Master C++ and Rust PhD is like a kids toy and a real turboprop engine.
Don't follow the hype - it will go away and you'll be left with what you've invested into.
I call this "Test Theatre" and it is real. I wrote about it last year:
https://benhouston3d.com/blog/the-rise-of-test-theater
You have to actively work against it.
Does anyone know what this guy is having his agents build? Bc I looked a bit and all I see him ship is linkedin posts about Claude.
It’s not yet possible due to context size limitations.
LLMs can’t retain most codebases nor even most code files accurately - they start making serious mistakes at ~500 lines.
Paste a ~200 line React component or API endpoint, have it fix or add something, it’s fine, but paste a huge file, it starts omitting pieces, making mistakes, and it gets worse as time goes on.
You have to keep reminding it by repeatedly refreshing context with the part in question.
Everyone who has seriously tried knows this.
For this reason alone the LLM “agent” is simply not one. Not yet. It cannot really drive itself and it’s a fundamental limitation of the technology.
Someone who knows more about model architecture might be able to chime in on why increasing the context size will/won’t help agents retain a larger working memory to acceptable degrees of accuracy, but as it stands it’s so limited that it works more like a calculator that you must actively use rather than an autonomous agent.
Been running 6 AI agents for my solo operation for a few months now. One does market research, another writes content, third handles video scripts. Not coding agents - business operations agents.
The overnight thing is real but overhyped. What actually works is giving agents very narrow tasks with clear success criteria. "Research top 10 Reddit threads about X and summarize pain points" works great. "Build me a feature" overnight is a coin flip.
Biggest lesson: the bottleneck moved from execution to context management. Getting agents to remember what matters and forget what doesn't is harder than the actual task delegation.
It's... really the same problem when you hire people to just write tests. A lot of time it just confirms that the code does what the code does. Having clear specs of what the code should do make things better and clearer.
At this stage, AI is no longer a tool that enhances your ability to ship code, it has replaced you entirely in that role. You don't control what is shipped, and you can't verify if it's correct. That's a serious problem! As software engineers, we remain accountable for code we no longer fully understand.
Then, what comes next feels less like a new software practice and more like a new religion, where trust has to replaces understanding, and the code is no longer ours to question.
I've been doing differential testing in Gemini CLI using sub-agents. The idea is:
1. one agent writes/updates code from the spec
2. one agent writes/updates tests from identified edge cases in the spec.
3. a QA agent runs the tests against the code. When a test fails, it examines the code and the test (the only agent that can see both) to determine blame, then gives feedback to the code and/or test writing agent on what it perceives the problem as so they can update their code.
(repeat 1 and/or 2 then 3 until all tests pass)
Since the code can never fix itself to directly pass the test and the test can never fix itself to accept the behavior of the code, you have some independence. The failure case is that the tests simply never pass, not that the test writer and code writer agents both have the same incorrect understanding of the spec (which is very improbable, like something that will happen before the heat death of the universe improbable, it is much more likely the spec isn't well grounded/ambiguous/contradictory or that the problem is too big for the LLM to handle and so the tests simply never wind up passing).
I guess to reach this point you have already decided you don't care what the code looks like.
Something I'm starting to struggle with is when agents can now do longer and more complex tasks, how do you review all the code?
Last week I did about 4 weeks of work over 2 days first with long running agents working against plans and checklists, then smaller task clean ups, bugfixes and refactors. But all this code needs to be reviewed by myself and members from my team. How do we do this properly? It's like 20k of line changes over 30-40 commits. There's no proper solution to this problem yet.
One solution is to start from scratch again, using this branch as a reference, to reimplement in smaller PRs. I'm not sure this would actually save time overall though.
Sounds like we've just gotten into lazy mode where we believe that whatever it spits out is good enough. Or rather, we want to believe it, and convince ourselves that some simple guardrail we put up will make it true, because God forbid we have to use our own brain again.
What if instead, the goal of using agents was to increase quality while retaining velocity, rather than the current goal of increasing velocity while (trying to) retain quality? How can we make that world come to be? Because TBH that's the only agentic-oriented future that seems unlikely to end in disaster.
I just don't understand where these people get all this money. The answer is often "oh it's just claude max" like man I don't have 200$ MONTHLY lying around ?? That's half my rent
Why would I ever want to book a course with someone who just realized weeks ago they don't know if the code does what they want if they don't look at it?
I've been impressed by Google Jules since the Gemini 3.1 Pro update. Sometimes it's been working on a task for 4h. I've now put it in a ralph loop using a Github Action to call itself and auto merge PRs after the linter, formatter and tests pass. It does still occasionally want my approval, but most of the time I just say Sounds great!
It's currently burning through the TESTING.md backlog: https://github.com/alpeware/datachannel-clj
Hm, what's actually being shipped here?
I've been playing around with agent orchestration recently and at least tried to make useful outputs. The biggest differences were having pipelines talk to each other and making most of the work deterministic scripts instead of more LLM calls (funnily enough).
Made a post about it here in case anyone is interested about the technicals: https://www.frequency.sh/blog/introducing-frequency/
> A few weeks ago I realized I had no reliable way to know if any of it was correct: whether it actually does what I said it should do.
I can't understand the mindset that would lead someone not to have realized this from the beginning.
And here I am turning my computer off at night for energy consumption, while others run a few extra ones for... for what, anyway? If you're working on problems real people are having (diseases, climate change, poverty, etc.) then sure, but exacerbating the energy transition for a blog post and your personal brand as OP seems to do? How's that not criminal
You can find approaches that improve things, but there's always going to be a chance that your code is terrible if you let an LLM generate it and don't review it with human eyes.
But review fatigue and resulting apathy is real. Devs should instead be informed if incorrect code for whatever feature or process they are working on would be high-risk to the business. Lower-risk processes can be LLM-reviewed and merged. Higher risk must be human-reviewed.
If the business you're supporting can't tolerate much incorrectness (at least until discovered), than guess what - you aren't going to get much speed increase from LLMs. I've written about and given conference talks on this over the past year. Teams can improve this problem at the requirements level: https://tonyalicea.dev/blog/entropy-tolerance-ai/
Remember this, guys? https://agilemanifesto.org/
Pet peeve: this post misunderstands “TDD.” What it really describes is acceptance tests.
TDD is a tool for working in small steps, so you get continuous feedback on your work as you go, and so you can refine your design based on how easy it is to use in practice. It’s “red green refactor repeat”, and each step is only a handful of lines of code.
TDD is not “write the tests, then write the code.” It’s “write the tests while writing the code, using the tests to help guide the process.”
Thank you for coming to my TED^H^H^H TDD talk.
I'm running 8 specialized AI agents on a Mac Mini right now. They handle research, content strategy, writing, security audits, code, and visual design. They run on cron schedules, have persistent memory between sessions, and each one improves weekly through self-improvement loops.
The cost concern is real but manageable. The key is routing models by task. Complex reasoning gets Opus, routine work gets Sonnet, mechanical tasks get Haiku. Not everything needs the expensive model.
The quality concern is the bigger one. What people miss about autonomous agents is that "running unsupervised" doesn't mean "running without guardrails." Each of my agents has explicit escalation rules, a security agent that audits the others, and a daily health report system that catches failures. The agents that work best are the ones with built-in disagreement, not the ones that just pass things through.
Wrote up the full architecture here if anyone's curious about the multi-agent coordination patterns: https://clelp.com/blog/how-we-built-8-agent-ai-team
All these macho men - I wonder what exactly are they shipping at that pace?
Not a rhetoric question. Trillion token burners and such.
Code and Claude Code hooks can conditionally tell the model anything:
#!python
print(“fix needed: method ABC needs a return type annotation on line 45”
import os
os.exit(2)
Claude Code will show that output to the model. This lets you enforce anything from TDD to a ban on window.alert() in code - deterministically.
This can be the basis for much more predictable enforcement of rules and standards in your codebase.
Once you get used to code based guardrails, you’ll see how silly the current state of the art is: why do we pack the context full of instructions, distract the model from its task, then act all surprised when it doesn’t follow them perfectly!
The concept of long-running background agents sounds appealing, but the real challenge tends to be reliability and task definition rather than raw model capability.
If an agent runs unattended for hours, small errors compound quickly. Even simple misunderstandings about file structure or instructions can derail the whole process.
They're definitely inferior to proper tests, but even weak CC tests on top of CC code is an improvement over no tests. If CC does make a change that shifts something dramatically even a weak test may flag enough to get CC to investigate.
Even better though - external test suits. Recently made a S3 server of which the LLM made quick work for MVP. Then I found a Ceph S3 test suite that I could run against it and oh boy. Ended up working really good as TDD though.
Many times there is really no way of getting around some of the expert-human judgement complexity of the larger question of "How to get agents to build reliably".
One example I have been experimenting is using Learning Tests[1]. The idea is that when something new is introduced in the system the Agent must execute a high value test to teach itself how to use this piece of code. Because these should be high leverage i.e. they can really help any one understand the code base better, they should be exceptionally well chosen for AIs to use to iterate. But again this is just the expert-human judgement complexity shifted to identifying these for AI to learn from. In code bases that code Millions of LoC in new features in days, this would require careful work by the human.
[1] https://anthonysciamanna.com/2019/08/22/the-continuous-value...
Solo founder here, shipping a real product built mostly with AI. The code review thing is real but my actual daily pain is different. AI lies about being done. It'll say "implemented" and what it actually did is add a placeholder with a TODO comment. Or it silently adds a fallback path that returns hardcoded data when the real API fails, and now your app "works" but nothing is real.
I've also given it explicit rules like "never use placeholder images, always generate real assets" — and it just... ignores them sometimes. Not always. Sometimes. Which is worse, because you can't trust it but you also can't not use it.
The 80% it writes is fine. The problem is you still have to verify 100% of it.
I tried getting the ai to write the tests. It created placeholders that contained no code but returned a success.
Seems like QA is the new prompt engineering
I am getting started to claude projects... Any usefull things . . . worth knowing that saves free limits . . . . .
> Writing acceptance criteria is harder than writing a prompt, because it forces you to think through edge cases before you've seen them. Engineers resist it for the same reason they resisted TDD, because it feels slower at the start.
This resonates with my experience, and it is also a refreshing honest take: pushing back on heavy upfront process isn't laziness, it's just the natural engineers drive to build things and feel productive.
Wasn't the best practice to run one model/coding agent that writes the code and another one that reviews it? E.g. Claude Code for writing the code, GPT Codex to review/critique it? Different reward functions.
He admits the real hole himself: "this doesn't catch spec misunderstandings. If your spec was wrong to begin with, the checks will pass."
But there's a second problem underneath that one. Acceptance criteria are ephemeral. You write them before prompting, Playwright runs against them, and then where do they go? A Notion doc. A PR comment. Nowhere permanent. Next time an agent touches that feature, it's starting from zero again.
The commit that ships the feature should carry the criteria that verified it. Git already travels with the code. The reasoning behind it should too.
It's an interesting problem that even though it's represented by just you as a single person, I think this is shared across the board with larger corporations at scale. I know for example they were seeing this with game devs in regards to the Godot engine. So many people were uploading work done by AI that has been unverified that people just can't keep up with it. And maybe some of it's good, but how do you vet all the crap out? No one knows what's being written anymore (and non-devs can code now too, which is amazing, but part of the problem that we introduced). I think in the future of being a developer will be more about verifying code integrity and working with AI to ensure it is meeting said standards. Rather than actually being in the driver's seat. Not sexy, but we're handing the keys over willingly, yet, AI is only interpreting the intent. It's going to get things wrong no matter what we do.
In the end you'll always have to manually validate the output, to ensure that what the test case tests is correct. When you write a test case, that's always what you need to do, to ensure that the test case passes in the right conditions, and you have to test that manually.
Since you have to test that manually anyway, you can have AI write the code first; you test it; if it's the right result, you tell AI this is correct, so write test cases for this result.
Our app is a desktop integration and last year we added a local API that could be hit to read and interact with the UI. This unlocked the same thing the author is talking about - the LLM can do real QA - but it's an example of how it can be done even in non-web environments.
Edit: I even have a skill called release-test that does manual QA for every bug we've ever had reported. It takes about 10 hours to run but I execute it inside a VM overnight so I don't care.
Somewhat unrelated but are there good boilerplate/starter repos that are optimized for agent based development? Setting up the skills/MCPs/AGENTS.md files seems like a lot of work.
This is a really good article but I do kind take issue with the intro, because it's the same assertion I see all over the place :
> Changes land in branches I haven't read. A few weeks ago I realized I had no reliable way to know if any of it was correct: whether it actually does what I said it should do.
> I care about this. I don't want to push slop
They clearly didn't care about that. They only cared about non stop lines of code generation and shipping anything fast. Otherwise they wouldn't need weeks to realise that they weren't reading or testing this code - it's obvious from the outset.
Maybe their approach to this changed and that's fine, but at the beginning they very much did not care and I feel people only keep saying that do because otherwise they'd need to be the one to admit the emperor isn't wearing clothes.
> At some point you're not reviewing diffs at all, just watching deploys and hoping something doesn't break.
To everyone who plan on automating themselves out of a job by taking the human element out- this is the endgame that management wants: replacing your (expensive and non-tax-optimized) labor with scalable Opex.
This is TDD? Tests first, then code? I do first the docs, then the tests, then the code. For years.
What he describes is like that. Just that the plan step is suggesting docs, not writing actual docs.
> When Claude writes tests for code Claude just wrote, it's checking its own work.
You can have Gemini write the tests and Claude write the code. And have Gemini do review of Claude's implementation as well. I routinely have ChatGPT, Claude and Gemini review each other's code. And having AI write unit tests has not been a problem in my experience.
Regarding the self-congratulation machine - I simply use a different claude code session to do the reviews. There is no self-congratulation, but overly critical at times. Works well.
Honestly, sometimes the harnesses, specs, some predefined structure for skills etc all feel over-engineering. 99% of the time a bloody prompt will do. Claude Code is capable of planning, spawning sub-agents, writing tests and so on.
Claude.md file with general guidelines about our repo has worked extraordinarily good, without any external wrappers, harnesses or special prompts. Even the MD file has no specific structure, just instructions or notes in English.
the overnight cost thing is real. "$200 in 3 days" is actually pretty tame compared to what happens when you have agents spawning sub-tasks without a budget cap.
the part that doesn't get talked about enough: most people are hitting a single provider API and treating it as fixed cost. but inference pricing varies a lot across providers for the same model. we've seen 3-5x spreads for equivalent quality on commodity models.
so half the cost problem is architectural (don't let agents spin unboundedly) and the other half is just... shopping around. not glamorous but real.
The hardest part of running agents autonomously is the data quality problem. When your agent runs unsupervised, every decision is only as good as the data it pulls. Having agents access authoritative structured sources (government APIs, international org datasets) rather than scraping random pages makes a huge difference. The real failure mode is not hallucination - it is the agent confidently acting on unreliable data.
I am afraid that we are heading to a world in which we simply give up on the idea of correct code as an aspiration to strive for. Of course code has always been bad, and of course good code has never been a goal in the whole startup ecosystem (for perfectly legitimate reasons!). But that real production code, for services that millions or even billions of people rely on, should be reliable, that if it breaks that's a problem, this is the whole _engineering_ part of software engineering. And we can say: if we give that up we're going to have a whole lot more outages, security issues, all those things we are meant to minimize as a profession. And the answer is going to be: so what? We save money overall. And people will get used to software being unreliable; which is to say, people will not have a choice but to get used to it.
The cowboy gunslinging knows no bounds.
One thing I've been wrestling with building persistent agents is memory quality. Most frameworks treat memory as a vector store — everything goes in, nothing gets resolved. Over time the agent is recalling contradictory facts with equal confidence.
The architecture we landed on: ingest goes through a certainty scoring layer before storage. Contradictions get flagged rather than silently stacked. Memories that get recalled frequently get promoted; stale ones fade.
It's early but the difference in agent coherence over long sessions is noticeable. Happy to share more if anyone's going down this path.
This _all_ (waves hands around) sounds like alot of work and expense for something that is meant to make programming easier and cheaper.
Writing _all_ (waves hands around various llm wrapper git repos) these frameworks and harnesses, built on top of ever changing models sure doesn't feel sensible.
I don't know what the best way of using these things is, but from my personal experience, the defaults get me a looong way. Letting these things churn away overnight, burning money in the process, with no human oversight seems like something we'll collectively look back at in a few years and laugh about, like using PHP!