Yeah this is just wrong.
The whole point of AI is that it can generalise to stuff outside its training set, and anyone who uses Claude on a daily basis completes tasks that have not already been completed elsewhere.
These models excel at tool use. They're using CRMs, word processors, and dozens of other systems that weren't programmable before. Lots of tools have opened MCP/API/CLI interfaces for the first time specifically to support AI, and it works.
I don’t know where this meme comes from, but we haven’t “invented the last language” and we’re not going to be frozen in 2023 for tooling, just as the Industrial Revolution didn’t stop at automating artisan workshops but produced the modern factory system.
That's not what I'm saying. I'm saying that if you make "Django, but different", it isn't for agents.
"Django, but different" is not a "tool use" situation. It is a framework with a ton of conventions, libraries, etc. Agents will be better at writing Django than "Django, but different". Will they work with your new libraries? Of course. They're very good at all sorts of coding tasks, and they can read docs, search the web, experiment, and correct themselves in an agentic context even absent any relevant training data. But what may have been a one-shot with Django code might require several tries with your new thing.
That is not an argument against making new things. I'm not making any argument against making new things, anywhere in this thread. My argument is that if you make "Django, but different", it isn't "for agents", because agents already know Django and they know your new thing considerably less. Your new thing is more work for the agent.
My comment is about being honest with yourself and others about what you're building and for whom.