Hacker News

Animats yesterday at 10:09 PM

The OP has an amusing side point - LLMs have automated sucking up to management. There is a large market for that.

His main point, though, is this:

I have a colleague ... who spent two months earlier this year building a system that should have been designed by someone with formal training in data architecture. He used the tools well, by the standards by which use of the tools is currently measured. He produced a great deal of code, a great deal of documentation, a great deal of what looked, to anyone who did not know what to look for, like progress. He could not, when asked, explain how any of it actually worked. The work was wrong from the first day. The schemas, and more importantly the objectives, were wrong in a way that would have been obvious to anyone with two years in the field.

I've been reading many rants like that lately. If they came with examples, they would be more helpful. The author does not elaborate on "the schemas, and more importantly the objectives, were wrong". The LLM's schema vs. a "good" schema should have been in the next paragraph. That would change the article from a rant to a bug report. We don't know what went wrong here.

It's not clear whether the trouble is that the schema can't represent the business problem, or that the database performance is terrible because the schema is inefficient. If you have the schema and the objectives, that's close to a specification. Given a specification, LLMs can potentially do a decent job. If the LLM generates the spec itself, then it needs a lot of context which it probably doesn't have.

This isn't necessarily an LLM problem. Large teams producing in-house business process systems tend to fall into the same hole. This is almost the classic way large in-house systems fail.


Replies

beachy yesterday at 10:37 PM

My friend built a construction management SaaS entirely via Claude.

It looked damned impressive, and it kind of worked to demo, but he is in no way a programmer, though he understood the problem domain very well. I asked a few basic questions:

- Where is the data stored?

- How would you recover from a database failure?

- Does it consume tokens at runtime?

- What runtime is used on the back end?

- Why are the web pages 3 MB in size and take forever to load?

He had no idea.

It's a typical vibe coding scenario, and people like to paint this as why vibe coding sucks.

I think however that all that is needed to bridge the gap is some very simple feedback from an expert at the right time.

For example, to someone who knows about databases, it's pretty easy to look at a database schema and spot stuff that looks off: denormalised data, weird columns. That takes ten minutes, and the feedback could be given directly to the LLM.
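As a sketch of the kind of thing a ten-minute schema review catches (the `orders`/`customers` tables here are hypothetical, not from the SaaS in question): customer attributes repeated on every order row, which a reviewer would flag and which takes one message to the LLM to fix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The smell a reviewer spots at a glance: customer details are
# duplicated on every order row instead of living in their own table,
# so updating an email means touching every historical order.
cur.execute("""
    CREATE TABLE orders_denormalised (
        order_id       INTEGER PRIMARY KEY,
        customer_name  TEXT,
        customer_email TEXT,
        item           TEXT,
        total_cents    INTEGER
    )
""")

# The fix the expert would dictate: factor the repeated attributes
# into a customers table and reference them by key.
cur.executescript("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT,
        email       TEXT
    );
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(customer_id),
        item        TEXT,
        total_cents INTEGER
    );
""")

cur.execute("INSERT INTO customers (name, email) VALUES (?, ?)",
            ("Ada", "ada@example.com"))
cur.execute("INSERT INTO orders (customer_id, item, total_cents) "
            "VALUES (1, 'bricks', 500)")

# One UPDATE now fixes the email everywhere, instead of one per order.
cur.execute("UPDATE customers SET email = ? WHERE customer_id = 1",
            ("ada@new.example",))
row = cur.execute("""
    SELECT c.email
    FROM orders o JOIN customers c USING (customer_id)
""").fetchone()
print(row[0])  # the join sees the updated email
```

The denormalised version isn't always wrong (it can be a deliberate read-optimisation), which is exactly why it takes a human with context to say which case this is.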

Likewise someone who knows a little about systems architecture could make sure at the outset that some good practices are followed, e.g.:

- "I want your help to build this system but at runtime I do not want to consume any tokens."

- "I want the system to store its data in Postgres (or whatever) and I want documented recovery plans if the database craps itself".

- "I want web pages to load and render as quickly as possible, then pull data in from the back end, with loading indicators showing where the UI is not yet up to date."

JSR_FDED today at 3:14 AM

There’s no need to defend LLMs. The article is making the point that a colleague who shouldn’t have been anywhere near specifying work for LLMs to do, was able to fake it and get rewarded for it.

stellalo today at 5:40 AM

It doesn't look like the OP, or that specific paragraph, is describing an LLM problem, but rather a people problem.

amoss today at 6:41 AM

The details might bury his point rather than illustrate it. The driving theme throughout seems to be that a tool tuned for correct syntax, without deep understanding of semantics, will look like a Dunning-Kruger machine. The specific errors that the author's colleague was oblivious to don't add any weight to that general point; they only explain one specific instance. It's classic omega-consistency.