Hacker News

alphazardtoday at 1:08 AM20 repliesview on HN

There's an undertone of self-soothing "AI will leverage me, not replace me", which I don't agree with especially in the long run, at least in software. In the end it will be the users sculpting formal systems like playdoh.

In the medium run, "AI is not a co-worker" is exactly right. The idea of a co-worker will go away. Human collaboration on software is fundamentally inefficient. We pay huge communication/synchronization costs to eke out mild speed-ups on projects by adding teams of people. Software is going to become an individual sport, not a team sport, quickly. The benefits we get from checking in with other humans, like error correction and delegation, can all be done better by AI. I would rather have a single human architect (for now) with good taste and an army of agents than a team of humans.


Replies

GuB-42today at 2:43 PM

> In the end it will be the users sculpting formal systems like playdoh.

And unless the user is a competent programmer, at least in spirit, it will look like the creation of the 3-year-old next door, not like Wallace and Gromit.

It may be fine, but the difference is that one is only loved by its parents, while the other gets millions of people to go to the theater.

Play-Doh gave the power of sculpting to everyone, including small children, but if you don't want to make an ugly mess, you have to be a competent sculptor to begin with, and that involves some fundamentals that do not depend on the material. There is a reason why clay animators are skilled professionals.

The quality of vibe coded software is generally proportional to the programming skills of the vibe coder as well as the effort put into it, like with all software.

show 2 replies
thewebguydtoday at 4:28 PM

> In the end it will be the users sculpting formal systems like playdoh.

I’m very skeptical of this unless the AI can manage to read and predict emotion and intent based off vague natural language. Otherwise you get the classic software problem of “What the user asked for directly isn’t actually what they want/need.”

You will still need at least some experience with developing software to actually get anything useful. The average “user” isn’t going to have much success for large projects or translating business logic into software use cases.

Gudtoday at 5:33 PM

I love this optimistic take.

Unfortunately, I believe the following will happen: by positioning themselves close to lawmakers, the AI companies will in the near future declare ownership of all software code developed using their software.

They will slowly erode their terms of service, as happens to most internet software, step by step, until they claim total ownership.

The point is to license the code.

show 3 replies
Tade0today at 9:18 AM

> The benefits we get from checking in with other humans, like error correction, and delegation can all be done better by AI.

Not this generation of AI though. It's a text predictor, not a logic engine - it can't find actual flaws in your code, it's just really good at saying things which sound plausible.

show 10 replies
paulryanrogerstoday at 1:20 AM

This assumes every individual is capable of succinctly communicating to the AI what they want. And the AI is capable of maintaining it as underlying platforms and libraries shift.

And that there is little value in reusing software initiated by others.

show 2 replies
lich_kingtoday at 8:26 PM

> There's an undertone of self-soothing "AI will leverage me, not replace me",

Which is especially hilarious given that this article is largely or entirely LLM-generated.

andrei_says_today at 6:45 PM

LLM technology has no connection with reality, nor any avenue for actual understanding.

Correcting conceptual errors requires understanding.

Vomiting large amounts of inscrutable, unmaintainable code for every change is not exactly an ideal replacement for a human.

We have not even started to scratch the surface of the technical debt these systems are creating at lightning speed.

show 1 reply
veunestoday at 6:28 PM

Communication overhead between humans is real, but it's not just inefficiency; it's also where a lot of the problem-finding happens. Many of the biggest failures I've seen weren't because nobody could type the code fast enough, but because nobody realized early enough that the thing being built was wrong, brittle, or solving the wrong problem.

show 1 reply
thwartedtoday at 3:31 AM

> We pay huge communication/synchronization costs to eke out mild speed-ups on projects by adding teams of people.

Something Brooks wrote about 50 years ago, and which the industry has never fully acknowledged. Throw more bodies at it, be they human bodies or bot agent bodies.
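A rough back-of-the-envelope sketch of why those costs blow up (this assumes the usual n(n-1)/2 pairwise-channel reading of The Mythical Man-Month; the numbers are illustrative, not from the comment):

    # Illustrative only: pairwise communication channels grow
    # quadratically with team size, n * (n - 1) / 2.
    for n in (2, 5, 10, 20):
        channels = n * (n - 1) // 2
        print(f"{n:2d} people -> {channels:3d} channels")

Doubling the team from 10 to 20 people takes the channel count from 45 to 190, which is the "huge communication cost for mild speed-up" trade-off in one line.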

show 1 reply
Abstract_Typisttoday at 6:58 PM

> it will be the users sculpting formal systems like playdoh.

People are pushing back against this phrase, but on some level it seems perfect; it should be visualized and promoted!

show 1 reply
mossTechniciantoday at 11:09 AM

Everybody in the world is now a programmer. This is the miracle of artificial intelligence.

- Jensen Huang, February 2024

https://www.techradar.com/pro/nvidia-ceo-predicts-the-death-...

show 4 replies
overgardtoday at 1:46 AM

Well, without the self soothing I think what's left is pitchforks.

falcor84today at 3:32 AM

> AI will leverage me

I think I know what you mean, and I do recall once seeing "this experience will leverage me" as indicating that something will be good for a person, but my first thought when seeing "x will leverage y" is that x will step on top of y to get to their goal, which does seem apt here.

its-kostyatoday at 2:45 PM

How does a single human acquire said "good taste" for architecting?

lp4v4ntoday at 6:20 PM

>In the end it will be the users sculpting formal systems like playdoh.

Yet another person who thinks that there is a silver bullet for complexity. The mythical intelligent machine that can erect flawless complex systems from poorly described natural language is the philosopher's stone of our time.

benreesmantoday at 3:57 AM

I'm rounding the corner on a ground-up reimplementation of `nix` in what is now about 34 hours of wall clock time. I have almost all of it on `wf-record`; I'll post a stream, but you can see the commit logs here: https://github.com/straylight-software/nix/tree/b7r6/correct...

Everyone has the same ability to use OpenRouter. I have a new event loop based on `io_uring` with deterministic playback modeled on the Trinity engine, a new WASM compiler, AVX-512 implementations of all the cryptography primitives that approach theoretical maximums, a new store that will hit theoretical maximums, and the first formal specification of the `nix` daemon protocol outside of an APT, and I'm upgrading those specifications to `lean4` proof-bearing codegen: https://github.com/straylight-software/cornell.

34 hours.

Why can I do this and no one else can get `ca-derivations` to work with `ssh-ng`?

show 1 reply
zombottoday at 10:54 AM

> I would rather a single human (for now) architect with good taste and an army of agents than a team of humans.

A human might have taste, but AI certainly doesn't.

show 2 replies
teaearlgraycoldtoday at 10:02 AM

Well of course. In the long run AI will do almost all tasks that can be done from a computer.

TacticalCodertoday at 5:51 PM

> especially in the long run, at least in software

"at least in software".

Before that happens, the world as we know it will already have changed so much.

Programmers have already automated many things, way before AI, and now they've got a new tool to automate even more things. Sure, in the end AI may automate programmers themselves, but not before oh-so-many people are out of a job.

A friend of mine is a translator: translation tolerates approximation, some level of bullshittery. She gets maybe 1/10th the work she used to get and she's now in trouble. My wife now does all her SMEs' websites by herself, with the help of AI tools.

A friend of my wife's is a junior lawyer (another domain where bullshitting flies high), and the reason she was given when she was kicked out of her company: "we've replaced you with LLMs". LLMs are the ultimate bullshit producers, so it's no surprise junior lawyers are now having a hard time.

In programming, a single character is the difference between a security hole and no security hole. There's a big difference between something that kinda works but is neither performant nor secure and, say, Linux or Git or K8s (which AI models do run on and which AI didn't create).

The day programmers are replaced will only come after AI has disrupted so many other jobs that it will be the least of our concerns.

Translators, artists (another domain where lots of approximative, full-on bullshit is produced), even lawyers (juniors at least), are having more and more problems due to half-arsed AI output coming after their jobs.

It's the jobs whose output is bullshit that tolerates approximation that are going to be replaced first. And the world is full of bullshit.

But you don't fly a 767, and you don't design a machine that treats brain tumors, with approximations. That is not bullshit.

There will be non-programmers with pitchforks burning datacenters, or ubiquitous UBI, way before AI has replaced programmers.

That it's an exoskeleton for people who know what they're doing rings very true: it's yet another superpower for devs.

MattGaisertoday at 9:21 AM

> We pay huge communication/synchronization costs to eke out mild speed-ups on projects by adding teams of people.

I am surprised at how little this is discussed and how little urgency there is in fixing this if you still want teams to be as useful in the future.

Your standard agile ceremonies were always kind of silly, but it can now take more time to groom work than to do it. I can plausibly spend more time scoring and scoping work (especially trivial work) than doing the work.

show 1 reply