Hacker News

ako · yesterday at 2:34 PM

I think I have enough control, probably more than when working with developers. Here's something I recently had Claude Code build: https://github.com/ako/backing-tracks

If you check the commit log, you'll see small increments. I have it generate the architecture document to validate the resulting architecture: https://github.com/ako/backing-tracks/blob/main/docs/ARCHITE...

Other than that, most changes start with the AI generating a proposal document that I review and improve, and then have it build. I think this was the starting proposal: https://github.com/ako/backing-tracks/blob/main/docs/DSL_PRO...

This started as a conversation in Claude Desktop, which was then summarized into this proposal. I copied that into Claude Code to have it implemented.


Replies

NilMostChill · yesterday at 3:01 PM

> I think i have enough control.

This is probably just a disagreement about the term "control", so we can agree to disagree on that one, I suppose.

The rest of the reply doesn't really relate to any of the points I mentioned.

That it's possible to use the tool successfully to achieve your goals wasn't in dispute.

I'll try to narrow it down:

---

> You are not a victim at the mercy of your LLM.

Yes, you absolutely are; it's how they work.

As I said, you can suggest guidelines and directions, but it's not guaranteed they'll be adhered to.

To be clear, this applies to people as well.

---

Directing an LLM (or LLM based orchestration system) is not the same as directing a team of people.

The "interface" is similar in that you provide instructions and guidelines and receive an attempt at the wanted outcome.

However, the underlying mechanisms are so different that the analogy you were trying to draw doesn't hold.

---

Again, LLMs can be useful tools, but presenting them as something they aren't only serves to muddy the waters of understanding how best to use them.

---

As an aside, IMO, the sketchy-salesman approach of over-promising on features and obscuring the limitations will do great harm to the adoption of LLMs in the medium to long term.

The misrepresentation of terminology is also contributing to this.

The term "AI" is being used intentionally to attribute a level of reasoning and problem-solving capability beyond what actually exists in these systems.
