Hacker News

Aurornis · yesterday at 11:34 PM · 32 replies

The key point for me was not the rewrite in Go or even the use of AI, it was that they started with this architecture:

> The reference implementation is JavaScript, whereas our pipeline is in Go. So for years we’ve been running a fleet of jsonata-js pods on Kubernetes - Node.js processes that our Go services call over RPC. That meant that for every event (and expression) we had to serialize, send over the network, evaluate, serialize the result, and finally send it back.

> This was costing us ~$300K/year in compute, and the number kept growing as more customers and detection rules were added.

For something so core to the business, I'm baffled that they let it get to the point where it was costing $300K per year.

The fact that this only took $400 of Claude tokens to completely rewrite makes it even more baffling. I can make $400 of Claude tokens disappear quickly in a large codebase. If they rewrote the entire thing with $400 of Claude tokens, it couldn't have been that big - well within the range of something that engineers could have migrated by hand in a reasonable time. Those same engineers will now have to review and understand all of the AI-generated code, and then improve it, which will take time too.

I don't know what to think. These blog articles are supposed to be a showcase of engineering expertise, but bragging about having AI vibecode a replacement for a critical part of your system that was questionably designed and costing as much as a fully-loaded FTE per year raises a lot of other questions.


Replies

ezst · today at 6:52 AM

>> This was costing us ~$300K/year in compute, and the number kept growing as more customers and detection rules were added.

> For something so core to the business, I'm baffled that they let it get to the point where it was costing $300K per year.

And this, this is the core/true/insightful story the executives will never hear about.

hansvm · yesterday at 11:44 PM

I mostly agree, but it's more appropriate to weigh contributions against an FTE's output rather than their input. If I have a $10m/yr feature I'm fleshing out now and a few more lined up afterward, it's often not worth the time to properly handle any minor $300k/yr boondoggle. It's only worth comparing to an FTE's fully loaded cost when you're actually able to hire to fix it, and that's trickier since it takes time away from the core team producing those actually valuable features and tends to result in slower progress from large-team overhead even after onboarding. Plus, even if you could hire to fix it, wouldn't you want them to work on those more valuable features first?

throwaway2037 · today at 11:13 AM

Spot on. This is excellent analysis.

I was also bothered by this:

> Until recently, I was rather skeptical of agentic code. February 2026, however, has been a sort of inflection point even stubborn developers like myself can’t ignore.

"February 2026" is just way too specific. It feels like a PR/marketing team wrote it. It acts like a jump scare in the post for any normie programmer.
andai · yesterday at 11:45 PM

Yeah, it's like those posts "we made it 5,000x faster by actually thinking about what the code is doing."

DrBazza · today at 9:51 AM

Most of the other replies to this hit the nail on the head.

A human writes some poor but working code that's supposed to be a demo, and 9 times out of 10 it goes to production.

Then it becomes critical infrastructure.

Then management cannot understand why something that works needs a rewrite, because there are no tangible numbers attached to it. The timeless classic developer problem.

We were here ^^^^ up to 2024-2025.

Now, with LLMs, you can at least come up with a vibe-coded, likely correct, likely faster solution in a morning, one that management won't moan at you about.

endofreach · today at 2:10 PM

> I don't know what to think. These blog articles are supposed to be a showcase of engineering expertise, but bragging about having AI vibecode a replacement for a critical part of your system that was questionably designed and costing as much as a fully-loaded FTE per year raises a lot of other questions.

I agree. But most of the time, the people responsible for the codebase / architecture do not want those questions raised. AI is a greatly appreciated emergency exit for those situations. Apparently.

SkyPuncher · today at 4:16 AM

In my experience, a lot of these types of migrations aren't incredibly deep in terms of actual code being written. It's about being able to assess all of the affected facets accurately. Once that's all mapped out, it's pretty straightforward to migrate.

andersmurphy · today at 8:17 AM

Wonder if the real value of LLMs/AI is similar to microservices in that it solves an organisational/culture problem.

In this case, AI allowed the developer to make a change that the organisation would not have allowed. Regular rewrites don't let you signal to investors that you are AI-ready/ascendant/agentic (whatever the latest AI hype term is), so they would have been blocked. But an AI rewrite does.

CalRobert · today at 9:41 AM

"For something so core to the business, I'm baffled that they let it get to the point where it was costing $300K per year."

You build something that's a dirty hack but it works, then your company grows, and nobody ever gets around to rebuilding it properly.

I was at a place spending over $4 million a year on Redshift, basically because someone had slapped together some bad (but effective!) queries when the company was new. Then they grew, and so many things had been built on top that they were terrified to touch anything underneath.

hobofan · yesterday at 11:43 PM

> If they rewrote the entire thing with $400 of Claude tokens it couldn't have been that big.

The original is ~10k lines of JS + a few hundred for a test harness. You can probably oneshot this with a $20/month Codex subscription and not even use up your daily allowance.

wouldbecouldbe · today at 11:57 AM

Yeah, that's the skeptical key point.

The practical key point is: if you want to do a large migration, have a very good and extensive test suite that Claude is not allowed to change during the migration. Then Claude is extremely impressive and accurate at migrating your codebase and needs minimal handholding. If you don't have a test suite, Claude will be freewheeling all the way. I just did an extensive migration project, and should have focused on the test suite much more.

raincole · today at 8:03 AM

> Those same engineers will have to review and understand all of the AI-generated code now and then improve it, which will take time too.

Will they? What makes you think so? If no one cared to improve it when it cost $300k/year, no one will care about it when it's cheaper now.

jackkinsella · today at 7:46 AM

A more charitable explanation would be that they were under product pressure for more features and were never given the slack time to even explore this angle. Happens a lot.

heavyset_go · today at 4:27 AM

I wonder how much it would have cost them if they weren't paying cloud rates for all of that, and they kept the same general inefficient architecture, sans the Kubernetes bloat.

Doubt they'd have a blog post to write about that, though.

pascahousut · today at 10:46 AM

My understanding is that it's a common and sad phenomenon of the cloud era: systems are unnecessarily complex and costly relative to the computational requirements of the volume at which the system will realistically be used. For example, it's very easy to have more microservices than users, because bootstrapping complicated systems has never been easier; but architecting good systems and finding the correct problems to solve is just as hard as it has ever been.

deckar01 · today at 12:46 AM

You aren’t accounting for managerial politics. A product manager won’t gamble on a large project to lower operating cost, when their bonus is based on customer acquisition metrics.

arjie · today at 4:30 AM

I've seen it happen and it's usually just Normalization of Deviance in an organization that is focusing on something else. Someone needs some kind of functionality and Kube makes creating services trivial so they launch it into a different service[0]. Over time, while people are working on important things this thing occasionally has load issues so someone goes and bumps the maxReplicas up periodically. Eventually you come back to it a year later and maxReplicas is at 24 and you've removed the code paths for almost everything that is hitting the server except some inexplicable hot-loop.

Then you look at it and you're like "Jesus! What the fuck, I meant to have this be a stop-gap". I've done as bad when at near 100% duty-cycle. Often you're targeting just the primary thing that's blocking some revenue and if you get caught yak-shaving you're screwed. A year ago, I did one of these things because I was in the middle of two projects that were blocking a potential hundred-million in revenue.

A year down the line, Claude Opus 4.6 could have live-solved it. But Claude of that time would have required some time and attention and I was doing something else.

That engineering team is some 15 people strong and the company is at $400m+ revenue. If you saw the code, you'd wonder why anyone would have done something like this.

0: I once did this because some inscrutable code/library was tying us to an old runtime so I just encapsulated it in HTTP and moved it into a service.

staticassertion · today at 2:32 PM

I could easily see this as a case where the team had a legacy area of code in a language no one was familiar with anymore, so no one felt great about actually contributing to it. It languished, and now AI let them go "fuck it, let's just rewrite it".

faangguyindia · today at 9:55 AM

I've worked at many companies.

Kubernetes, App Engine, Beanstalk: all are huge money sinks.

All the managed services like Cloud Datastore and Firestore also tend to accrue a lot of cost once you have a good-sized app.

They're quick to start with when you don't have any traffic. Once traffic comes, the cost goes up drastically.

You can always do better running your own services.

hiyer · today at 2:41 AM

I was thinking the same - if JSONata was a priority for them, why not choose a language with good support for it, like JS or Java? OTOH, if the development language was the priority, why not choose a format that is well supported in it?

Psyonic · today at 2:40 PM

Also don't miss that he had to do this work on the weekend...

otherme123 · today at 6:38 AM

> If they rewrote the entire thing with $400 of Claude tokens it couldn't have been that big.

It was "A few iterations and some 7 hours later - 13,000 lines of Go with 1,778 passing test cases."

hperrin · today at 4:05 PM

Don’t forget that by using an AI, they don’t actually own the code. That’s public domain code now, since it can’t be protected by copyright.

cogogo · yesterday at 11:42 PM

Think this is pure piggyback marketing on what Cloudflare did with Next.js. In my experience, a company that raised $30MM a month ago is extremely unlikely to be investing energy in cost rationalization/optimization.

edit: saw the total raise, not the incremental $30MM

pshirshov · today at 10:17 AM

Engineers are afraid of writing custom parsers and interpreters.

hparadiz · today at 4:15 AM

I've been refactoring stuff with a $20 ChatGPT account.

PunchyHamster · today at 12:47 PM

Normally I'd say "Good architecture is far from a requirement for a profitable product; good enough is good enough, and you can optimize later"

...but this is a VC-funded AI startup, and the product might still be burning VC money on each customer even after optimizing it.

neya · today at 4:54 AM

No offence, but inexperienced JS fanatics always do this because of some weird affection they have for the language itself. Otherwise, even a decently qualified CTO would have chosen to keep everything in Go from the beginning, or would not have waited until they were bleeding $300k. JS is also the worst possible language choice for this problem. So it definitely sounds like a bunch of script kiddies with fancy titles bought with VC money rather than actual experience.

karel-3d · today at 1:14 PM

The result is literally on GitHub:

https://github.com/RecoLabs/gnata

I have no idea what JSONata is. It seems it is not THAT hard to rewrite in Go, just very tedious, and it would cost more than $400 USD in developer time.
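For anyone else in the same boat: JSONata (jsonata.org) is a query-and-transformation language for JSON, the kind of thing detection rules get written in. A couple of illustrative expressions (inputs and results shown in comments, not verified against the gnata repo):

```
/* input: {"orders": [{"price": 10}, {"price": 20}]} */

$sum(orders.price)        /* sum over a path: 30 */

orders[price > 15].price  /* filter with a predicate: 20 */
```

The tedium of a rewrite comes from reimplementing the full set of path semantics, predicates, and built-in functions compatibly, not from any one expression being complicated.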

antonvs · today at 5:39 AM

Completely agree. We have > $50m from our most recent funding round, and even a cloud expense of $50k/year (in our case for storage) is considered a high priority to address. If it was $300k, our CTO would be running around with a butane torch setting everyone’s hair on fire until the problem was resolved.

But, venture funding does create a lot of weird inefficiencies which vary from company to company.
