Hacker News

egl2020 · yesterday at 6:28 PM

"You can learn anything now. I mean anything." This was true before LLMs. What's changed is how much work it takes to get an "answer". If the LLM hands you that answer, you've forgone the learning you might otherwise have gotten by (painfully) working out the answer yourself. There is a trade-off: getting an answer now versus learning for the future. I recently used an LLM to translate a Linux program to Windows because I wanted the program Right Now and decided that was more important than learning those Windows APIs. But I did give up a learning opportunity.


Replies

lich_king · yesterday at 7:09 PM

I'm conflicted about this. On one hand, I think LLMs make it easier to discover explanations that, at least superficially, "click" for you. Sure, those explanations were available before, but maybe in textbooks you needed to pay for (how quaint), or on websites buried on the fifth page of search results. Whatever the externalities, in the short term that part may be a net positive for learners.

On the other hand, learning is doing; if it's not at least a tiny bit hard, it's probably not learning. This is not strictly an LLM problem; it's the same issue I have with YouTube educators. You can watch dazzling visualizations of problems in mathematics or physics, and it feels like you're learning, but you're probably not walking away from that any wiser because you have not flexed any problem-solving muscles and have not built that muscle memory.

I've had multiple interactions like that: someone asked an LLM for an ELI5 and tried to leverage it in a conversation, and... the abstraction they came back with felt profound to them, but was useless and wrong.

_doctor_love · yesterday at 7:05 PM

It always comes down to economics and then the person and their attitude towards themselves.

Some things are worth learning deeply, in other cases the easy / fast solution is what the situation calls for.

I've thought recently that some kinds of 'learning' with AI are not really that different from using Cliffs Notes back in the day. Sometimes getting the Cliffs Notes summary was the way to get a paper done OR a way to quickly get through a boring/challenging book (Scarlet Letter, amirite?). And in some cases reading the summary is actually better than the book itself.

BUT - I think everyone could agree that if you ONLY read Cliffs Notes, you're just cheating yourself out of an education.

That's a different and deeper issue because some people simply do not care to invest in themselves. They want to do minimum work for maximum money and then go "enjoy themselves."

Getting a person to take an interest in themselves, in their own growth and development, to invite curiosity, that's a timeless problem.

scott_s · yesterday at 7:15 PM

That's not what the author means. Multiple times a day, I have conversations with LLMs about specific code or general technologies. It is very similar to having the same conversation with a colleague. Yes, the LLM may be wrong. Which is why I'm constantly looking at the code myself to see if the explanation makes sense, or finding external docs to see if the concepts check out.

Importantly, the LLM is not writing code for me. It's explaining things, and I'm coming away with verifiable facts and conceptual frameworks I can apply to my work.

wcfrobert · yesterday at 7:08 PM

My solution to this is to prioritize. There isn't enough time in a person's life to learn everything anyway.

Selectively pick and struggle through things you want to learn deeply. And let AI spoon-feed you for things you don't care as much about.

twodave · yesterday at 6:41 PM

I am beginning to disagree with this, or at least to question its universal truth. So often, "learning" is an exercise in applying wrong advice many times until something finally succeeds.

Take retrieving the absolute path an Angular app is running at, in a way that is safe both on the client and in SSR contexts. There is a very clear answer, but there are a myriad of wrong ways people attempt the task before they stumble upon the Location injectable.
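A minimal sketch of the pattern being described, assuming a recent Angular version (the component and selector names here are illustrative, not from the thread): the `Location` service from `@angular/common` abstracts over the browser URL, so it works both on the client and under server-side rendering, where `window` is unavailable.

```typescript
import { Component } from '@angular/core';
import { Location } from '@angular/common';

@Component({
  selector: 'app-path-banner',
  template: `<p>You are at: {{ currentPath }}</p>`,
})
export class PathBannerComponent {
  readonly currentPath: string;

  constructor(location: Location) {
    // location.path() returns the normalized URL path (e.g. "/docs/intro")
    // without reading window.location directly, so it is SSR-safe.
    this.currentPath = location.path();
  }
}
```

The key point is that the dependency-injected service, not a global browser object, is the source of truth, which is exactly the kind of non-obvious convention people flail around before discovering.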

In cases like the above, the LLM is often able to tell you the correct answer the first time (meaning a lot less "noise" from being taught wrong things along the way), and is often able to explain how the answer applies in a way that teaches you something you'd never have learned otherwise.

We have spent the last 3 decades refining what it means to "learn" into buckets that held a lot of truth as long as the search engine was our interface to learning (and before that, reading textbooks). Some of this rhetoric begins to sound like "seniority" at a union job or some similar form of gatekeeping.

That said, there are also absolutely times (and it's not always clear whether a particular example is one of them!) when learning something the "long" way builds our long-term/muscle memory or expands our understanding in a valuable way.

And this is where using LLMs is still a difficult choice for me. I think it's less difficult a choice for those with more experience, since we can more confidently distinguish between the two, but I no longer think learning/accomplishing things via the LLM is always a self-damaging route.

colecut · yesterday at 10:07 PM

AI gave you the option of making it happen without learning anything.

It also gives you an avenue to accelerate your learning if that is your goal.

mgraczyk · yesterday at 9:05 PM

I learn a lot faster now with LLMs.

You could learn the Windows APIs much faster now, if you wanted to learn them.

dieselgate · yesterday at 9:31 PM

Reminds me of something a friend said toward the end of college: "it's only like 12 thousand dollars a year to learn everything there is to know."

Take it with a grain of salt...

esafak · yesterday at 8:03 PM

At the rate things are changing, it is uncertain what will be valuable in the future.

tsunamifury · yesterday at 6:37 PM

Books are for the mentally enfeebled who can't memorize knowledge.

- Socrates

doctorpangloss · yesterday at 8:16 PM

I don't know, most shit I learned programming (and subsequently get paid for) is meaningless arcana. For example, Kubernetes. And for you, it's Windows APIs.

For programming in general, most learning is worthless. This is where I disagree with you. If you belong to a certain set of cultures, you over-index on the idea that math (for example) is the best way to solve problems, that you must learn all this stuff by a certain pedagogy, and that the people who are best at this are the best at solving problems, which of course is not true. This is why we have politics, and why we have great politicians who hail from cultures underrepresented in high-level math study: getting elected, having popular ideas, and convincing people solves far more of the problems people actually have than math does.

This isn't to say that procedural thinking isn't valuable. It's just that, well, joke's on you: ChatGPT will lose elections, but you can have it do procedural thinking pretty well, and what does the learning and economic order look like now? I reject this form of generalization, but there is tremendous schadenfreude in watching the math people destroy their own relevance.

All that said, my actual expertise, people don't pay for. Nobody pays for good game design or art direction (my field). They pay because you know Unity and they don't. They can't tell (and do not pay for) the difference between a good and bad game.

Another way of stating this for the average CRUD developer: most enterprise IT projects fail, so the learning didn't really matter anyway. It's not useful to learn how to deliver a better failed enterprise IT project, other than to make money.

One more POV: the effortlessness of agentic programming makes me more sympathetic to anti-intellectualism. Most people do not want to learn anything, including people at fancy colleges, including your bosses and your customers, though many fewer in the academic world than in, say, the corporate one. If you told me a chatbot could achieve in hours what would take a world expert days or weeks, I would wisely spend more time playing with my kids and just wait. The waiters are winning, even in game development (cultural product development generally). It's better to wait for these tools to get more powerful than to learn meaningless arcana.

aspenmartin · yesterday at 6:59 PM

I do disagree with the notion that you have to slog through a problem to learn efficiently. The framing that it's either "the easy way [bad, you don't learn]" or "the hard way [good, you do learn]" is a false dichotomy. Agents/LLMs are like having an always-on, highly adept teacher who can synthesize information intuitively and with whom you can explore a topic. That's extremely efficient and effective for learning. There may be a tradeoff in some areas, but the idea that LLMs make you not learn doesn't feel right; they allow you to learn _as much as you want, about the things that you want_, which wasn't possible before. You used to have to learn, inefficiently(!), a bunch of crap you didn't want to in order to reach the thing you _did_ want. I will not miss those days.
