I wanted to test how far AI coding tools could take a production project. Not a prototype. A social media management platform with 12 first-party API integrations, multi-tenant auth, encrypted credential storage, background job processing, approval workflows, and a unified inbox. The scope would normally keep a solo developer busy for the better part of a year. I shipped it in 3 weeks.
Before writing any code, I spent time on detailed specs, an architecture doc, and a style guide. All public: https://github.com/brightbeanxyz/brightbean-studio/tree/main...
I broke the specs into two buckets: tasks that could run in parallel across multiple agents, and tasks with dependencies that had to merge first. This planning step was the whole game. Without it, the agents produce a mess.
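The mental model for the split was a topological sort into waves: everything whose dependencies have already merged can fan out to agents at once. A toy sketch of that grouping (task names invented for illustration, not the real plan):

```python
# Illustrative only: grouping spec tasks into parallel "waves".
from graphlib import TopologicalSorter

# task -> set of tasks it depends on (hypothetical names)
deps = {
    "models": set(),
    "auth": {"models"},
    "provider_facebook": {"models"},
    "provider_linkedin": {"models"},
    "inbox_ui": {"auth"},
    "approval_flow": {"auth", "models"},
}

ts = TopologicalSorter(deps)
ts.prepare()
wave = 1
while ts.is_active():
    ready = list(ts.get_ready())            # everything here can run in parallel
    print(f"wave {wave}: {sorted(ready)}")  # one agent per task in a wave
    ts.done(*ready)
    wave += 1
```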
I used Opus 4.6 (Claude Code) for planning and building the first pass of backend and UI. Opus holds large context better and makes architectural decisions across files more reliably. Then I used Codex 5.3 to challenge every implementation, surface security issues, and catch bugs. Token spend was roughly even between the two.
Where AI coding worked well: Django models, views, serializers, standard CRUD. Provider modules for well-documented APIs like Facebook and LinkedIn. Tailwind layouts and HTMX interactions. Test generation. Cross-file refactoring, where Opus was particularly good at cascading changes across models, views, and templates when I restructured the permission system.
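Part of why the provider modules went so smoothly is that they all follow the same narrow interface. A simplified sketch of that shape (names illustrative, not the repo's exact API):

```python
# Simplified shape of a provider module; names are illustrative.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class PublishResult:
    external_id: str  # the platform's ID for the created post
    permalink: str


class Provider(ABC):
    """One subclass per platform; the rest of the app talks only to this."""

    @abstractmethod
    def publish(self, account, text: str, media: list[bytes]) -> PublishResult: ...

    @abstractmethod
    def refresh_token(self, account) -> None: ...
```

With the interface fixed, generating a new subclass for a well-documented API was close to mechanical.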
Where it fell apart: TikTok's Content Posting API has poor docs and an unusual two-step upload flow. Both tools generated wrong code confidently, over and over. Multi-tenant permission logic produced code that worked for a single workspace but leaked data across tenants in multi-workspace setups. These bugs passed tests, which is what made them dangerous. OAuth edge cases like token refresh, revoked permissions, and platform-specific error codes all needed manual work. Happy path was fine, defensive code was not. Background task orchestration (retry logic, rate-limit backoff, error handling) also required writing by hand.
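The multi-tenant leak deserves a concrete illustration. This isn't the actual code, but it's the shape of the bug the agents kept producing, and of the fix:

```python
# Illustrative, not the repo's code. Lives inside a Django app;
# SocialAccount is assumed to exist with a workspace FK.
from django.db import models


class MessageQuerySet(models.QuerySet):
    def for_workspace(self, workspace):
        # Every inbox query must go through this. Forgetting the filter
        # is exactly the bug the agents produced -- and it passed tests.
        return self.filter(account__workspace=workspace)


class Message(models.Model):
    account = models.ForeignKey("SocialAccount", on_delete=models.CASCADE)
    body = models.TextField()
    objects = MessageQuerySet.as_manager()


# Buggy: scoped to the user, not the workspace. Fine with one workspace,
# leaks as soon as a user belongs to several:
#   Message.objects.filter(account__user=request.user)
# Fixed:
#   Message.objects.for_workspace(request.workspace)
```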
One thing I underestimated: without dedicated UI designs, getting a consistent UX was brutal. All the functionality was there, but screens were unintuitive and some flows weren't reachable through the UI at all. 80% of the features worked in 20% of the time; the remaining 80% of the time went to polish and making the experience actually usable.
The project is open source under AGPL-3.0. 12 platform integrations, all first-party APIs. Django 5.x + HTMX + Alpine.js + Tailwind CSS 4 + PostgreSQL. No Redis. Docker Compose deploy, 4 containers.
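On the "No Redis" point: at this scale a plain Postgres table can serve as the job queue. A minimal sketch of that pattern (illustrative only, using psycopg 3; real worker code needs the retry and backoff logic mentioned above):

```python
# Minimal sketch of a Postgres-backed job queue (illustrative only).
# Assumes a jobs(id, payload, status) table.
import psycopg

CLAIM = """
    UPDATE jobs SET status = 'running'
    WHERE id = (
        SELECT id FROM jobs
        WHERE status = 'pending'
        ORDER BY id
        FOR UPDATE SKIP LOCKED  -- concurrent workers never grab the same row
        LIMIT 1
    )
    RETURNING id, payload
"""


def claim_one(conn: psycopg.Connection):
    with conn.transaction():
        row = conn.execute(CLAIM).fetchone()
    return row  # None when the queue is empty
```

A worker then just loops on claim_one and sleeps briefly when it comes back empty; no broker process required.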
Ask me anything about the spec-driven approach, platform API quirks, or how I split work between the two models.
I built something very, very similar for a client to post their content on schedule to about 9 different social networks. It was my first major vibe-coded app -- I normally vibe-code a function or small apps. Took about two hours with Claude [0] by just building up the functionality in layers, testing each layer as we went. If I'd rawdogged it, 2022-style, it would probably have taken me a month to write.
It's been running flawlessly for months without a single error. In fact, spooky good. I still feel nervous about it, though, and check it every morning.
It is built mostly as a web app in .NET 10 Razor with a SQLite DB.
The APIs for the social networks are the hazy bit. As OP mentions, some are badly documented. Some are a pain to get access to. I was using a driven browser to post to Twitter, but they opened their API recently, which was nice.
[0] I used Claude in GitHub Copilot, so the total cost was less than $10/month in credits.
Thank you for this write-up. It's much more interesting than all the "Show HN" posts that don't mention anything about AI even though you can see it on every corner.
What you describe has also been my experience so far building projects mostly with AI from detailed specs, though with Rails instead of Django.
First, congrats on your accomplishment(s) and leveraging your AI+Python+WebDev talents.
Isn't this a SaaS-pocalypse testament? What's stopping anyone from doing the same to BrightBean? What's stopping anyone with a little domain knowledge and a $200+ Claude subscription from cloning your app, building yet another gap-filling, slightly improved content-syndication version, and going to market? Is it worth taking it to market when anyone can perpetuate the cycle?
I'm genuinely interested in knowing your thoughts.
That was an interesting article. I have a few questions about the workflow.
1. You mentioned developing tasks in parallel—how many agents were you actually running at the same time? Did you ever reach a point where, even if you increased the degree of parallelism, merging and reviews became the bottleneck, and increasing the number further didn’t speed things up?
2. I really relate to the idea of “80% of features in 20% of the time, then 80% on polish.” Did you use AI for this final polishing phase as well? In other words, did you show the AI screenshots of the screens and explain them? Also, when looking back, do you feel that if you had written the initial specifications more carefully, you could have completed the work faster?
This is amazing. I started doing the same, but I did not have the time to polish it.
Questions: why no X? Do you have a feature to resize (summarize?) the text to fit into short post boxes?
This is interesting, how do you publish to LinkedIn? I thought they didn't allow automated posts.
What did your harness look like for this?
Nothing wrong here, but Django/HTMX seem like quite "old" technologies to me for a new project made in 2026. Nowadays I use FastAPI/SQLAlchemy for the backend and SvelteKit on the frontend.
How much of the specs themselves came from the LLM? The development schedule https://github.com/brightbeanxyz/brightbean-studio/blob/main... has very AI-looking estimates, for example, and I can see a commit in the architecture.md file which exclusively changes em-dashes to normal dashes (https://github.com/brightbeanxyz/brightbean-studio/commit/74...), which suggests you wanted to make it seem less LLM-generated?
I ask, not to condemn, but to find out what your process was for developing the requirements. Clearly it was done with LLM help, but what was the refinement process?