The problem is that this is an incorrect interpretation of the study. The entire task in that study was specifically to learn a brand-new asynchronous library that participants had no prior experience with. On average, the group that used AI failed to learn how to use, explain, and debug that async library as well as the group that hadn't used AI, but that doesn't mean they lost pre-existing skills. It's literally in the study's title: "skill formation", not skill practice, maintenance, or deterioration.
I think it's also extremely worth pointing out that when you break down the AI-using group by how they actually used AI, those who had the AI both provide code and afterwards summarize the concepts and what it did actually scored among the highest. The same goes for those who asked the AI questions about the code it generated after it generated it. Which seems to indicate to me that as long as you have the AI explain and summarize what it did after each batch of edits, and you also use it to explore and explain existing code bases, you're not going to see this problem.
I'm so extremely tired of people like you who want to engage in this moral panic completely misinterpreting these studies.
Point taken. Still, isn’t an activity like learning a new library, language, or platform a fundamental part of being a software developer? Haven’t we all complained at some point about companies hiring “React developers”, because we all know the real skill is the ability to pick up new things? And to be clear, this isn’t moral panic; it’s a concern that we may end up in a future where people don’t know how systems work anymore, and we become dependent on two or three companies and their data-center moats to maintain any technology.