I'm presuming you don't like AI (apologies if I'm mistaken).
Just because someone dislikes a tool doesn't mean they can't learn using that new tool or method.
Users of an old technology often adopt a hostile disposition toward a new technology that threatens their skills. To claim people can't learn at a higher level of abstraction is absurd. Kids with motivation are smart, and they will outpace the older generation.
If I had the advantage of LLMs and agentic coding when I was a teenager, I could have gone wider and deeper in my career. I'm jealous that young learners are going to be able to do more than I could at their age. I'm happy for them.
If you're using AI to vibe code, then editing the results - that's learning. Period. That's a feedback loop. And it's probably more interesting and rewarding than what we had.
I use chatbots a lot to explore things I'm not well versed in, both in coding and elsewhere. As an entry point they're very useful. LLMs also help surface advanced knowledge that is otherwise opaque. Of course using an LLM can be a useful learning process. It would have helped me in my early career too.
If you never code yourself, I don't think what you learn will make it into your muscle memory. It's much the same for me with a language reference: I read it, think I've absorbed everything, then open my editor and can't type a thing, and have to go back and look up every bit I want to write. So the problem probably isn't even LLM-specific; it's the lack of repeated typing. And yes, I think manual typing is useful even with LLMs. Often the very subtle things are hard to explain and easier to just type. If you don't have them in your muscle memory, you're less efficient.
I am not convinced that vibe coding will teach you the right things. Writing code is one thing; making good decisions is a whole other level. You learn that only by failing over and over. A beginner wouldn't even understand the architecture and data structures they generated, so they wouldn't understand why they failed or how to improve. LLMs also vary widely in how they respond about the "right" way to deal with problems. I often disagree with them; they may have incomplete knowledge, or prefer their overtrained "best practices", or worse, just give different answers based on statistical variance. If you need any decision, they're good; if you need a quality decision perfectly suited to your constraints, they require a lot of instructions and will still fail.
I don't hate AI; I hate that some people are very naive about its usage and usefulness. I don't see AI threatening my skills; it probably threatens parts of the things I've delivered in the past. But to be honest, those were the boring parts. Let the vibe coders have them. And if you really think/hope that LLMs will excel at certain coding tasks, then you should be wary of specializing in those tasks. Because one day, they won't need your help anymore.