Hacker News

thisisbrians · yesterday at 6:19 PM · 10 replies

It is and will always be about: 1) properly defining the spec, and 2) ensuring the implementation satisfies said spec.


Replies

nickjj · yesterday at 6:34 PM

> properly defining the spec

Then why do you often need to re-prompt with things like "can you simplify this and make it more human readable without sacrificing performance?"? No amount of specification addresses this on the first shot unless you already know the exact implementation details, in which case you might as well write it yourself directly.

I often have to put in a prompt like this 5-10 times before the code resembles something I'd even consider using as a first-draft base to refactor into something worthy of a git commit.

I sometimes use AI for tiny standalone functions or scripts, so we're not talking about a lot of deeply nested complexity here.

QuadrupleA · yesterday at 7:53 PM

Side note, everyone's talking about having AI agents "conform to the spec" these days. Am I in my own bubble, or - who the hell these days gets The Spec as a well-formed document? Let alone a good document, something that can be formally verified, thoroughly test-cased, can christen the software "complete" when all its boxes are ticked, etc.?

This seems like 1980s corporate waterfall thinking. It doesn't jibe with the messy reality I've seen: customers with unclear ideas, changing market and technical environments, the need for iteration and experimentation, mid-course correction, etc.

bwestergard · yesterday at 6:29 PM

That can't be the whole story, right? There is an arbitrarily large number of (e.g.) Rust programs that will implement any given spec expressed in terms of unit tests, types, and perhaps some performance benchmarks.

But even accounting for all these "hard" constraints and metrics, there are clearly reasons to prefer some possible programs over others even when they all satisfy the same constraints and perform equally on all relevant metrics.
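A toy Python sketch of that underdetermination (both function names and the leap-year example are my own invention, not from the thread): two implementations that satisfy the exact same unit-test "spec", where one states a theory of the problem and the other merely reproduces the behavior.

```python
def is_leap_v1(year: int) -> bool:
    # States the Gregorian rule directly: divisible by 4,
    # except centuries, except every fourth century.
    # The program *is* a readable theory of the problem.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def is_leap_v2(year: int) -> bool:
    # Equivalent behavior via XOR of the three divisibility flags,
    # exploiting the fact that divisibility by 400 implies 100 implies 4.
    # Passes the same tests, but the theory is gone.
    return (year % 4 == 0) ^ (year % 100 == 0) ^ (year % 400 == 0)

# Both pass any test suite over years, yet we'd clearly prefer v1.
```

Any spec phrased as tests and types accepts both, which is the commenter's point: the constraints alone don't capture why one program is better.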

We do treat programs as efficient causes[1] of side effects in computing systems: a file is written, a block of memory is updated, etc. and the program is the cause of this.

But we also treat them as statements of a theory of the problem being solved[2]. And this latter treatment is often more important socially and economically. It is irrational to be indifferent to the theory of the problem the program expresses.

[1]: https://en.wikipedia.org/wiki/Four_causes#Efficient

[2]: https://pages.cs.wisc.edu/~remzi/Naur.pdf

krupan · yesterday at 7:13 PM

Good sir, have you heard the Good Word of the Waterfall development process? It sounds like that's what you are describing.

rawgabbit · yesterday at 6:23 PM

I had a CIO tell me 15 years ago that, with Agile, I was wasting my time on specs and design documents.

raizer88 · yesterday at 6:24 PM

AI: "Yes, the specs are perfectly clear and architectural standards are fully respected."

[Imports the completely fabricated library docker_quantum_telepathy.js and calls the resolve_all_bugs_and_make_coffee() method, magically compiling the code on an unplugged Raspberry Pi]

AI: "Done! The production deployment was successful, zero errors in the logs, and the app works flawlessly on the first try!"

ambicapter · yesterday at 6:24 PM

Then pulling the lever until it works! You can also code up a little helper to continuously pull the lever until it works!

dgxyz · yesterday at 6:27 PM

Well, it's more about how much we care about those.

Which, with the advent of LLMs, means we've just lowered our standards so we can claim success.

CodingJeebus · yesterday at 6:26 PM

Personally, I get a huge rush of dopamine seeing LLMs build out complex features very quickly to the point that it will keep me up all night wanting to push further and further.

That's where the gambling metaphor really resonates. It's not about whether the output is correct; I've been building software for many years and I know how to direct LLMs pretty well at this point. But I'm also an alcoholic in recovery and I know that my brain is wired differently than most. Using LLMs has tested my ability to self-regulate in ways that I haven't dealt with since I deleted social media years ago.

BurningFrog · yesterday at 6:54 PM

That was always the easy part.

The endless next steps of "and add this feature", "this part needs to work differently", "this seems like a bug?", or "we must speed up this part!" are where 98% of the effort always was.

Is it different with AI coding?