AI is incredibly dangerous because it can do the simple things very well, which prevents new programmers from learning the simple things ("Oh, I'll just have AI generate it") which then prevents them from learning the middlin' and harder and meta things at a visceral level.
I'm a CS teacher, so this is where I see a huge danger right now and I'm explicit with my students about it: you HAVE to write the code. You CAN'T let the machines write the code. Yes, they can write the code: you are a student, the code isn't hard yet. But you HAVE to write the code.
I had my first interview last week where I finally saw this in the wild. It was a student applying for an internship. It was the strangest interview. They had excellent textbook knowledge. They could tell you the space and time complexities of any data structure, but they couldn't explain anything about code they'd written or how it worked. After many painful and confusing minutes of trying to get them to explain, like, literally anything about how this thing on their resume worked, they finally shrugged and said that "GenAI did most of it."
It was a bizarre disconnect having someone be both highly educated and yet crippled by not doing.
What you as a teacher teach might have to adapt a bit. Teaching how code works is more important than teaching how to code. Most academic computer scientists aren't necessarily very skilled as programmers in any case. At least, I learned most of that after I stopped being an academic myself (Ph.D. and all). This is OK. Learning to program is more of a side effect of studying computer science than it is a core goal (this is not always clearly understood).
A good analogy here is programming in assembler. Manually crafting programs at the machine code level was very common when I got my first computer in the 1980s. Especially for games. By the late 90s that had mostly disappeared. RollerCoaster Tycoon was one of the last huge commercial successes coded that way. C/C++ took over and these days most game studios license an engine and then do a lot of work in languages like C# or Lua.
I never did any meaningful amount of assembler programming. It was mostly no longer a relevant skill by the time I studied computer science (94-99). I built an interpreter for an imaginary CPU at some point using a functional programming language in my second year. Our compiler course was taught by people like Erik Meijer (who later worked on things like F# at MS), who just saw it as a great excuse to teach people functional programming instead. In hindsight, that was actually a good skill to have as functional programming interest heated up a lot about 10 years later.
The point of this analogy: compilers are important tools. It's more important to understand how they work than it is to be able to build one in assembler. You'll probably never do that. Most people never work on compilers. Nor do they build their own operating systems, databases, etc. But it helps to understand how they work. The point of teaching how compilers work is understanding how programming languages are created and what their limitations are.
Not only that, it erodes your constitution. I'm finding this with myself. After vibe coding for a month or so I let my subscription expire. Now when I look at the code it's like, "ugh, you mean now I have to think about this with my own brain???"
Even while vibe-coding, I often found myself getting annoyed just having to explain things. The amount of patience I have for anything that doesn't "just work" the first time has drifted toward zero. If I can't get AI to do the right thing after three tries, "welp, I guess this project isn't getting finished!"
It's not just laziness, it's like AI eats away at your pride of ownership. You start a project all hyped about making it great, but after a few cycles of AI doing the work, it's easy to get sucked into, "whatever, just make it work". Or better yet, "pretend to make it work, so I can go do something else."
When learning basic math, you shouldn't use a calculator, because otherwise you never really understand how it works. Later, when learning advanced math, you can use calculators, because you're focusing on a different abstraction level. I see the two situations as very similar.
I remember reading about a metal shop class, where the instructor started out by giving each student a block of metal, and a file. The student had to file an end wrench out of the block. Upon successful completion, then the student would move on to learning about the machine tools.
The idea was to develop a feel for cutting metal, and to better understand what the machine tools were doing.
--
My wood shop teacher taught me how to use a hand plane. I could shave off wood with it that was so thin it was transparent. I could then join two boards together with a barely perceptible crack between them. The jointer couldn't do it that well.
I see junior devs hyping vibe coding and senior devs mostly using AI as an assistant. I fall in the latter camp myself.
I've hired and trained tons of junior devs out of university. They become 20x productive after a year of experience. I think vibe coding is getting new devs to 5x productivity, which seems amazing, but then they get stuck there because they're not learning. So after year one, they're a 5x developer, not a 20x developer like they should be.
I have some young friends who are 1-3 years into software careers, and I'm surprised by how little they know.
Same with essay assignments, you exercise different neural pathways by doing it yourself.
Recently in comments people were claiming that working with LLMs has sharpened their ability to organize thoughts, and that could be a real effect that would be interesting to study. It could be that watching an LLM organize a topic could provide a useful example of how to approach organizing your own thoughts.
But until you do it unassisted you haven’t learned how to do it.
I haven't done long division in decades, am probably unable to do it anymore, and yet it has never held me back in any tangible fashion (and won't unless computers and calculators stop existing)
LLMs are not bicycles for the mind. They are more like E-bikes. More assist makes you go faster, but provides less exercise.
https://www.slater.dev/2025/08/llms-are-not-bicycles-for-the...
They don't always do the simple things well which is even more frustrating.
I do Windows development and GDI stuff still confuses me. I'm talking about memory DCs, compatible DCs, DIBs, DDBs, DIBSECTION, BitBlt, SetDIBits, etc... AIs also suck at this stuff. I'll ask for help with a relatively straightforward task and it almost always produces code that, when you ask it to defend the choices it made, it finds problems with, apologizes for, and goes in circles. One AI (I forget which) actually told me I should refer to Petzold's Programming Windows book because it was unable to help me further.
"Why think when AI do trick?" is an extremely alluring hole to jump headfirst into. Life is stressful, we're short on time, and we have obligations screaming in our ear like a crying baby. It seems appropriate to slip the ring of power onto your finger to deal with the immediate situation. Once you've put it on once, there is less mental friction to putting it on the next time. Over time, gently, overuse leads to the wearer cognitively deteriorating into a Gollum.
This is a good point. Letting people who are learning to code use AI would be like letting 6-to-10-year-olds in school just use pocket calculators and never learn to do basic arithmetic manually. Yes, IRL you will have a calculator at hand; yes, the calculator will make fewer mistakes; still, for you to learn and understand, you have to do it manually.
I agree 100%. But as someone with 25 years of development experience, holy crap it's nice not having to do the boring parts as much anymore.
Agreed. I think the divide is between code-as-thinking and code-as-implementation. Trivial assignments and toy projects and geeking out over implementation details are necessary to learn what code is, and what can be done with it. Otherwise your ideas are too vague to guide AI to an implementation.
Without the clarity that comes from thinking with code, a programmer using AI is the blind leading the blind.
The social aspect of a dialogue is relaxing, but very little improvement is happening. It's like a study group where one (relatively) incompetent student tries to advise another, and then test day comes and they're outperformed by the weirdo that worked alone.
Yes! You are best served by learning what a tool is doing for you by doing it yourself or carefully studying what it uses and obfuscates from you before using the tool. You don't need to construct an entire functioning processor in an HDL, but understanding the basics of digital logic and computer architecture matters if you're EE/CompE. You don't have to write an OS in asm, but understanding assembly and how it gets translated into binary and understanding the basics of resource management, IPC, file systems, etc. is essential if you will ever work in something lower level. If you're a CS major, algorithms and data structures are essential. If you're just learning front end development on your own or in a boot camp, you need to learn HTML and the DOM, events, how CSS works, and some of the core concepts of JS, not just React. You'll be better for it when the tools fail you or a new tool comes along.
But what has changed? Students never had a natural reason to learn how to write FizzBuzz. It's been done before and it's not even useful. There has always been an arbitrary nature to these exercises.
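To make the "it's not even useful" point concrete: the whole exercise is a dozen lines. A minimal Python sketch (the function name and return-a-list shape are my own choices, not part of any standard):

```python
def fizzbuzz(n):
    """Classic exercise: for 1..n, emit 'Fizz' for multiples of 3,
    'Buzz' for multiples of 5, 'FizzBuzz' for multiples of both,
    otherwise the number itself."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

# fizzbuzz(5) -> ['1', '2', 'Fizz', '4', 'Buzz']
```

The value was never the output; it was forcing a beginner to juggle loops, conditionals, and operator precedence in their own head.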
I actually fear more for the middle-of-career dev who has shunned AI as worthless. It's easier than ever for juniors to learn and be productive.
Lots of interesting ways to spin this. I was in a computer science course in the late 90s and we were not allowed to use the C++ standard library because it made you a "lazy programmer" according to the instructor. I'm not sure if I agree with that, but the way I look at it is that computer science is all about abstraction, and it seems to me that AI, generative pair programming, vibe coding, or whatever you want to call it is just another level of abstraction. What is probably more important is to learn what are and are not good programming and project structures, and use AI to abstract away the boilerplate, scaffolding, etc. so that you can avoid footguns early in your development cycle.
Part of the issue here is that you can look at something and think "oh yeah I understand that, it makes perfect sense!", but then completely fail to reproduce it yourself.
I was so lucky to land in a CS class where we were writing C++ by hand. I don't think that exists anymore, but it is where I would go in terms of teaching CS from first principles.
I'm not so sure. I spent A LOT of time writing sorting algo code by hand in university. I spent so much time writing assembly code by hand. So much more time writing instructions for MIPS by hand. (To be fair I did study EE not CS)
I learned more about programming in a weekend badly copying hack modules for Minecraft than I learned in 5+ years in university.
All that stuff I did by hand back then, I haven't used a single time since.
The problem is: now they also need to learn to code with an LLM assistant. That goes beyond "coding it by yourself". Well, it's different, anyway. Another skill to teach.
It feels like coding agents have just abstracted the old programming problem of "computers do what you tell them, not what you mean to tell them"
Sure (knowing the underlying ideas and having proficiency in their application) - but producing software by conducting(?) LLMs is rapidly becoming a wide, deep and must-have skill and the lack thereof will be a weakness in any student entering the workplace.
Similarly, it's always been the case that copy-pasting code out of a tutorial doesn't teach you as much as manually typing it out, even if you don't change it. That part of the problem isn't even new.
AI does have an incredibly powerful influence on learning. It can absolutely be used as a detriment, but it can also be just as powerful of a learning tool. It all comes down to keeping the student in the zone of proximal development.
If AI is used by the student to get the task done as fast as possible the student will miss out on all the learning (too easy).
If no AI is used at all, students can get stuck for long periods, either because of mismatches between the instructional design and their specific learning context (a missing prerequisite) or because of mistakes in the instructional design itself.
AI has the potential to keep all learners within an ideal difficulty for optimal rate of learning so that students learn faster. We just shouldn't be using AI tools for productivity in the learning context, and we need more AI tools designed for optimizing learning ramps.
Yea, I doubt I could learn to program if I started today.
> You CAN'T let the machines write the code
People said this about compilers. It depends what layer you care to learn/focus on. AI at least gives us the option to move up another level.
I just think it's like hitting the snooze button.
Yes, exactly. I'm having a frustrating time reminding senior teachers of this, people with authority who should really know better. There seems to be some delusion that this technology will somehow change how people learn in a fundamental way.
It doesn't PREVENT them from learning anything - said properly, it lets developers become lazy and miss important learning opportunities. That's not AI's fault.
As a teacher, do you have any techniques to make sure students learn to write the code?
Completely disagree. It's like telling typists that they need to hand write to truly understand their craft. Syntax is just a way of communicating a concept to the machine. We now have a new (and admittedly imperfect) way of doing that. New skills are going to be required. Computer science is going to have to adapt.
I'm an external examiner for CS students in Denmark and I disagree with you. What we need in the industry is software engineers who can think for themselves, can interact with the business and understand its needs, and who know how computers work. What we get are mass-produced coders who have been taught some outdated way of designing and building software that we need to hammer out of them. I don't particularly care if people can write code like they work on an assembly line. I care that they can identify bottlenecks and solve them. That they can deliver business value quickly. That they will know when to do abstractions (which is almost never). Hell, I'd even like developers who know when code quality doesn't matter, because shitty code will cost $2 a year but every hour they spend on it is $100-200.
Your curriculum may be different than it is around here, but here it's frankly the same stuff I was taught 30 years ago. Except most of the actual computer science parts are gone, replaced with even more OOP, design pattern bullshit.
That being said, I have no idea how you'd actually go about teaching students CS these days, considering a lot of them will probably use ChatGPT or Claude regardless of what you do. That is what I see in the grade statistics around here. For the first 9 years I was a well-calibrated grader, but these past 1.5ish years it's usually either top marks or bottom marks with nothing in between. Which puts me outside where I should be, but it matches the statistical calibration for everyone here. I obviously only see the product of CS educations, but even though I'm old, I can imagine how many corners I would have cut myself if I had LLMs available back then. Not to mention all the distractions the internet has brought.
It’s like weightlifting: sure you can use a forklift to do it, but if the goal is to build up your own strength, using the forklift isn’t going to get you there.
This is the ultimate problem with AI in academia. We all inherently know that “no pain no gain” is true for physical tasks, but the same is true for learning. Struggling through the new concepts is essentially the point of it, not just the end result.
Of course this becomes a different thing outside of learning, where delivering results is more important in a workplace context. But even then you still need someone who does the high level thinking.