Hacker News

SwellJoe · last Tuesday at 9:19 PM · 3 replies

Inventing a new thing "for agents" always feels counter-productive. Your new thing isn't in the training data, so you have to teach it how to use your thing. Why not use tech that's already in the training data? Agents know Python and Django. Or, better (because the performance, maintainability, and deployment story are much nicer with no extra work, since agents write the code), agents know Go.

The very nature of LLMs means you can't invent a thing for current agents to use that they'll be better at using than the things they already know how to use from their immense training data. You can give them skills, sure, and that's useful, but it's still not their native tongue.

To make a thing that's really for agents, you need to have made a popular thing for humans ten years ago, so there's a shitload of code and documentation for them to train on.


Replies

mritchie712 · last Tuesday at 9:46 PM

this was true a year ago, but if you give an agent a new spec to follow (e.g. a .md file), it will follow it.

we have a custom .yaml spec for data pipelines in our product and the agent follows it as well as anything in the training data.

while I agree you don't need to build a new thing "for agents", you can get them to understand new things that are not in the training data very easily.
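
A custom spec like the one described might look something like this — a purely hypothetical sketch, since the comment doesn't show the actual format; every field name here (`source`, `transforms`, `sink`, etc.) is illustrative, not the commenter's real schema:

```yaml
# Hypothetical data-pipeline spec an agent could be handed and asked to follow.
# All names and structure are illustrative assumptions.
pipeline:
  name: daily_orders
  source:
    type: postgres
    table: orders
  transforms:
    - type: filter
      where: "status = 'complete'"
    - type: aggregate
      group_by: [customer_id]
      metrics: [sum(amount)]
  sink:
    type: warehouse
    table: orders_daily_summary
```

The point being made is that an agent given a spec document defining this format can emit valid instances of it, even though the format itself appears nowhere in its training data.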

rhubarbtree · last Wednesday at 7:54 AM

Yeah this is just wrong.

The whole point of AI is that it can generalise to stuff outside its training set, and anyone who uses Claude on a daily basis completes tasks that have not already been completed elsewhere.

These models excel at tool use. They’re using CRMs, word processors and dozens of other systems that weren’t programmable before - lots of tools have opened MCP/API/CLI interfaces for the first time specifically to support AI, and it works.

I don’t know where this meme comes from, but we haven’t “invented the last language” and we’re not going to be frozen in 2023 for tooling — just as the Industrial Revolution didn’t merely automate artisan workshops but invented the modern factory system.

gbibas · last Wednesday at 7:28 AM

This makes sense, but it also causes concern. With AI, whether for content or programming, novel approaches that may wind up being better in the long run get shut down for expediency in the short run. This is nothing new and not AI-specific behavior; large companies have been doing this forever, but it leads to a death of innovation and a spiral inward of self-reinforcing loops. You are absolutely right that LLMs won’t know it and will need to learn something like this all over, but they are good at that, and if we stop to find better patterns (which is what humans are great at doing), we keep creativity alive and find meaning while making our work more productive in the long term.
