Hacker News

lbreakjai · yesterday at 12:40 PM

We're going to do it again, aren't we? We're going to take something simple and sensible ("write tests first", "small composable modules", etc.), give it a fancy complicated name ("Behavior-Constrained Implementation Lifecycle pattern", "Boundary-Scoped Processing Constructs pattern", etc.), and create an entire industry of consultants and experts selling books and enterprise coaching around it, each swearing they have the secret sauce and the right incantations.

The damn thing _talks_. You can just _speak_ to it. You can just ask it to do what you want.


Replies

ElectricalUnion · yesterday at 2:59 PM

Common business-oriented language (COBOL) is a high-level, English-like, compiled programming language.

COBOL's promise was that it was human-like text, so we wouldn't need programmers anymore.

The problem is that the average person doesn't know what their actual problems are in sufficient detail to get a working solution. When you get down to breaking that problem down... you become a programmer.

The main lesson of COBOL is that it isn't the computer interface/language that necessitates a programmer.

jerf · yesterday at 8:44 PM

Worse yet, the problems are going to be real.

There's a lifecycle to these hype runs, even when the thing behind the hype is plenty real. We're still in the phase where if you criticize AI you get told you don't "get it", so people are holding back some of their criticisms because they won't be received well. In this case, I'm not talking about the criticisms of the people standing back and taking shots at the tech, I'm talking about the criticisms of those heavily using it.

At some point, the dam will break, and it will become acceptable, if not fashionable, to talk about the real problems the tech is creating. Right now there is only the tiniest trickle from the folk who just don't care how they are perceived, but once it becomes acceptable it'll be a flood.

And there are going to be problems that come from using vast quantities of AI on a code base, especially of the form "created so much code my AI couldn't handle it anymore and neither could any of the humans involved". There's going to need to be a discussion of techniques for how to handle this. There are going to be characteristic problems and solutions.

The thing that really makes this hard to track though is the tech itself is moving faster than this cycle does. But if the exponential curve turns into a sigmoid curve, we're going to start hearing about these problems. If we just get a few more incremental improvements on what we have now, there absolutely are going to be patterns as to how to use AI and some very strong anti-patterns that we'll discover, and there will be consultants, and little companies that will specialize in fixing the problems, and people who propose buzzword solutions and give lots of talks about it and attract an annoying following online, and all that jazz. Unless AI proceeds to the point that it can completely replace a senior engineer from top to bottom, this is inevitable.

keeda · yesterday at 9:31 PM

I'm not sure what this comment is addressing; I didn't find any fancy terms in TFA. If it's the title of the article itself, it seems simpler than "Things that help writing code effectively with AI agents."

> You can just ask it to do what you want.

Yes, but very clearly, as any HN thread on AI shows, different people are having VERY different outcomes with it. And I suspect it is largely the misconception that it will magically "just do what you want" that leads to poor outcomes.

The techniques mentioned -- coding, docs, modularity etc. -- may seem obvious now, but only recently did we realize that the primary principle emerging is "what's good for humans is good for agents." That was not at all obvious when we started off. It is doubly counter-intuitive given the foremost caveat has been "Don't anthropomorphize AI." I'm finding that is actually a decent way to understand these models. They are unnervingly like us, yet not like us.

All that to say, AI is essentially black magic and it is not yet obvious how to use it well for all people and all use-cases, so yes, more exposition is warranted.

logicprog · yesterday at 2:25 PM

I think the problem is that because it talks and understands English and more or less does whatever you ask, the affordances aren't particularly clear. That's actually one of the biggest problems with the chatbot model of AI: it has the same problems as the CLI, in that it's extremely flexible and powerful and you can do a lot with it and add a lot to it, but it's really not clear what way of interacting with it is more or less effective than any other, or what it can or can't do well.

I think attempts to document the most effective things to ask it to do in order to reach your overall goal, as well as what it is and is not good for, are probably worth making. It would be bad if it turned into a whole consultant-marketing OOP-coaching clusterfuck, yeah, but building some kind of community knowledge that these things aren't, like, demigods (they have limitations, and doing things one way or another with them can be better) is probably a good thing. At the very least, in theory, it would cut down some of the hype?

chasd00 · yesterday at 3:27 PM

> We're going to do it again, aren't we?

Yes. It sucks, but I think it's good for the next generation of tech industry employees to watch this. It's happening quickly, so you get a 10-year timeline compressed into a few years, which makes it easier to follow and observe. The bloggers will come, then speakers, then there will be books. Consultants will latch on and start initiatives at their clients. Once enough large enterprises are sold on it, there will come associations and certification bodies so a company can say "we have X certified abc on staff". Manifestos will be released, version numbers will be incremented so there's a steady flow of work writing books, doing trainings, and getting the next level certified.

This is standard issue tech industry stuff (and it probably happens everywhere else too) but compressed into a tighter timeline so you don't have to wait a decade to see it unfold.

tptacek · yesterday at 8:25 PM

Wait: "write tests first" isn't simple and it's controversial. The benefits of TDD in pure-human development are debatable (I'd argue, in many cases, even dubious). But the equation changes with LLMs, because the cost of generating tests (and of keeping them up to date) plummets, and test cases are some of the easiest code to generate and reason about.

It's not as simple an observation as you're making it out to be.
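For illustration, a minimal sketch of that test-first loop, using a hypothetical `slugify` function as the target (the function name and behavior are assumptions for the example, not from the thread):

```python
import re

# Red: the test is written first, pinning down the desired behavior
# before any implementation exists.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"

# Green: a minimal implementation, the kind of code an LLM can cheaply
# generate and regenerate against the tests above.
def slugify(text: str) -> str:
    # Lowercase, collapse runs of non-alphanumerics into single hyphens,
    # then strip leading/trailing hyphens.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

test_slugify()
```

The point is that the test is both the cheapest artifact to produce and the easiest to review, which is what shifts the TDD cost-benefit equation.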

didgeoridoo · yesterday at 7:06 PM

I don’t know, Simon has had a pretty sane and level head on his shoulders on this stuff. To my mind he’s earned the right to be taken seriously when talking about approaches he has found helpful.

layer8 · yesterday at 9:01 PM

> create an entire industry of consultants and experts selling books and enterprise coaching around it

I suspect that this time around, management will expect the AI chatbot to explain these things to you, because who pays for anything anymore if the AI can do it all.

monooso · yesterday at 1:48 PM

I'm confused. Are you criticising the article, or simply expressing concern for what may happen?

The context suggests the former, but your criticisms bear no relation to the linked content. If anything, your edict to "write tests first" is even more succinctly expressed as "Red/green TDD".

MattGrommes · yesterday at 8:44 PM

There's already BMAD - Breakthrough Method of Agile Agent Driven Development

Basically, it's Waterfall for Agents. Lots of Capitalized Words to signify something.

Also they constantly call it the BMAD Method, even though the M already stands for method.

JHer · yesterday at 3:17 PM

If all I have to do is ask the thing for what I want, where is all the great new software? Why isn't everyone running fully bespoke operating systems by now?

While I agree with the sentiment that we shouldn't make things more complicated by inventing fancy names, we also shouldn't pretend that software engineering has become super simple now. Building a great piece of software remains super hard to do and finding better techniques for it affords real study.

Your post is annoying me quite a bit because it's super unfair to the linked post. Simon Willison isn't trying to coin a new term, he's just trying to start a collection of useful patterns. "Agentic engineering" is just the obvious term for software engineering using agents. What would you call it, "just asking things"?

flir · yesterday at 12:53 PM

Has anyone staked a claim to "Agile AI" yet?

solarkraft · yesterday at 5:29 PM

> The damn thing _talks_. You can just _speak_ to it. You can just ask it to do what you want

I mean, yeah. So do humans. But it turns out that a lot of humans require considerable process to productively organize, too. A pet thesis of mine is that we are just (re-)discovering the usefulness of process and protocol.

SecretDreams · yesterday at 12:43 PM

> The damn thing _talks_. You can just _speak_ to it. You can just ask it to do what you want.

But can it pass the butter?

63stack · yesterday at 1:46 PM

People are rushing to be the first one to coin something and hit it big. Imagine the amount of $$$ you could get for being an "expert ai consultant" in this space.

There was already another attempt at agentic patterns earlier:

https://agentic-patterns.com/

Absolute hot air garbage.
