I wonder how long npm/pip etc. will even make sense.
Dependencies introduce unnecessary LOC and features which are, more and more, just written by LLMs themselves. It is often easier to just write the necessary functionality directly. Whether that is more maintainable is a bit YMMV at this stage, but I would wager it is improving.
Interesting thought (I think now more than ever it's a good idea to question assumptions), but IMO abstractions are as important as ever.
Maybe the smallest/most convenient packages (looking at you, is-even) are obsolete, but meaningful packages still abstract a lot of complexity that IMO isn't easy to one-shot with an LLM.
The most popular modules downloaded from pip and npm are not single simple functions and cannot easily be rewritten by an LLM.
Scikit-learn
Pandas
Polars
I consider packages with over 100k downloads production-tested. Sure, an LLM can roll some of this itself, but when the many edge cases appear (which may already be handled by public packages), you will have to handle them yourself.
At times I wonder why X TUI coding agent was written in JS/TS/Python rather than Go, if it's mostly LLM-coded anyway. But that's mostly my frustration at having to wait for npm to install a thousand dependencies instead of getting one executable plus some config files. There are also support libraries, like terminal UI toolkits, that differ in quality between platforms.
Well, you do need to vet dependencies, and I wish there were a way to exclude purely vibe-coded dependencies that no human has reviewed. But for well-established libraries, I do trust well-maintained, well-designed, human-developed code over AI slop.
Don't get me wrong, I'm not a Luddite. I use Claude Code and Cursor, but the code generated by either of them is nowhere near what I'd call good, maintainable code, and I end up rewriting/refactoring a big portion before it's in any halfway decent state.
That said, with the most egregious packages like left-pad etc. in the Node.js world, it was always a better idea to build your own instead of depending on them.
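For a sense of how little those micro-packages actually do, here are rough Python equivalents (the real modules are JavaScript; this is purely illustrative):

    # Rough inline equivalents of the famous npm micro-packages.
    def is_even(n: int) -> bool:
        # The entire functionality of the is-even package.
        return n % 2 == 0

    def left_pad(s: str, width: int, fill: str = " ") -> str:
        # Python already ships this as str.rjust; modern JS has String.prototype.padStart.
        return s.rjust(width, fill)

    print(is_even(4))             # True
    print(left_pad("7", 3, "0"))  # 007

Anything at that scale never needed to be a dependency, with or without an LLM.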
As long as "don't roll your own crypto" is considered good advice, you'll have at least a few packages/libraries that'll need managing.
For a decent number of relatively pedestrian tasks though, I can see it.
This is like saying Wikipedia doesn't make sense because there's now Grokipedia
Tokens are expensive and downloading is cheap. I think the opposite is probably true, really, and more packages will be written specifically for LLMs to use, because their APIs cost fewer tokens.
That was already the case for a lot of things like is-even.
You have insane delusions about how capable LLMs are, but even assuming it's somehow true: downloading deps instead of hallucinating more code saves you tokens.
Best to write assembly instead.
What a bizarre comment. Take something like NumPy: it has a hard dependency on BLAS implementations, where numerical correctness is highly valued and requires deep thinking to get right, for accuracy as well as for performance. It's written in a different language, again for performance, so an LLM would have to implement all of that too. What's the utility in burning energy to regenerate all of this when implementations already exist?
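For anyone who hasn't felt that gap firsthand, here is a minimal sketch of what a from-scratch rewrite is up against (assuming NumPy is built against any standard BLAS; exact timings vary by machine):

    # Compare a naive pure-Python matrix multiply against NumPy's BLAS-backed dot.
    import time
    import numpy as np

    n = 300
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    def naive_matmul(x, y):
        # Triple loop: no blocking, no vectorization, no cache awareness.
        out = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for k in range(n):
                xik = x[i][k]
                for j in range(n):
                    out[i][j] += xik * y[k][j]
        return out

    t0 = time.perf_counter()
    naive_matmul(a.tolist(), b.tolist())
    t1 = time.perf_counter()
    np.dot(a, b)
    t2 = time.perf_counter()

    print(f"naive python: {t1 - t0:.2f}s, numpy/BLAS: {t2 - t1:.4f}s")

The speed difference is typically several orders of magnitude, and that's before you get to the numerical-accuracy work that mature BLAS implementations encode.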