This is a must-read series of articles, and I think Kyle is very much correct.
The comparison to the adoption of automobiles is apt, and something I've thought about before as well. Just because a technology can be useful doesn't mean it will have positive effects on society.
That said, I'm more open to using LLMs in constrained scenarios, in cases where they're an appropriate tool for the job and the downsides can be reasonably mitigated. The equivalent position in 1920 would not be telling individuals "don't ever drive a car," but rather extrapolating critically about the negative social and environmental effects (many of which were predictable) and preventing the worst outcomes via policy.
But this requires understanding the actual limits and possibilities of the technology. In my opinion, it's important for technologists who actually see the downsides to stay aware and involved, and even be experts and leaders in the field. I want to be in a position to say "no" to the worst excesses of AI, from a position of credible authority.
I fear that outside of cataclysmic global warfare or some sort of Butlerian Jihad (which amounts to the same thing), this genie is not going back into the bottle.
This tech is 100% aligned with the goals of the 0.001% who own and control it. Almost all of the negatives cited by Kyle and the like-minded (such as myself) are in fact positives for them, in the context of massive population reduction to eliminate "useless eaters" and of technological control over the "NPCs" of the world who remain, since those who remain will likely be programmed by their paired AI, which will do the thinking for them.
So what to do entirely depends on whether you feel we are responsible to the future generations or not. If the answer is no, then what to do is scoped to the personal concerns. If yes, we need a revolution and it needs to be global.
This reminds me a bit of the ending of In the Beginning Was the Command Line:
> The people who brought us this operating system would have to provide templates and wizards, giving us a few default lives that we could use as starting places for designing our own. Chances are that these default lives would actually look pretty damn good to most people, good enough, anyway, that they'd be reluctant to tear them open and mess around with them for fear of making them worse. So after a few releases the software would begin to look even simpler: you would boot it up and it would present you with a dialog box with a single large button in the middle labeled: LIVE. Once you had clicked that button, your life would begin. If anything got out of whack, or failed to meet your expectations, you could complain about it to Microsoft's Customer Support Department. If you got a flack on the line, he or she would tell you that your life was actually fine, that there was not a thing wrong with it, and in any event it would be a lot better after the next upgrade was rolled out. But if you persisted, and identified yourself as Advanced, you might get through to an actual engineer.
> What would the engineer say, after you had explained your problem, and enumerated all of the dissatisfactions in your life? He would probably tell you that life is a very hard and complicated thing; that no interface can change that; that anyone who believes otherwise is a sucker; and that if you don't like having choices made for you, you should start making your own.
There are definitely salient points in the article and I appreciate its value in imploring us to really stop and consider the ramifications of what this technology might deliver. I think the analogy to cars and the unintended consequences for all manner of society is particularly apt.
That said, the final point is one I take issue with:
> For example, I’ve got these color-changing lights. They speak a protocol I’ve never heard of, and I have no idea where to even begin. I could spend a month digging through manuals and working it out from scratch—or I could ask an LLM to write a client library for me. The security consequences are minimal, it’s a constrained use case that I can verify by hand, and I wouldn’t be pushing tech debt on anyone else. I still write plenty of code, and I could stop any time. What would be the harm?
To me, there is no intrinsic value in solving this problem other than the rote problem-solving reps that make you a better problem solver. There isn't anything fundamental about the protocol they've never heard of that operates the lights. In the best case it's likely similar to many other well-thought-out protocols; in the worst case it's something slapped together.
There are certainly deeper, more fundamental concepts to learn like congestion control algorithms in TCP. Most things in software though are just learning another engineer's preferences for how they thought to build something.
I poke at this because if an exercise only yields the benefit of another rep of solving a problem, then it carries less weight with me. I personally don't think there will be fewer problems to solve with this technology, just problems of a different sort at a different layer of the stack.
Here are the articles in this series that got significant HN discussion (in chronological order for a change):
ML promises to be profoundly weird* - https://news.ycombinator.com/item?id=47689648 - April 2026 (602 comments)
The Future of Everything Is Lies, I Guess: Part 3 – Culture - https://news.ycombinator.com/item?id=47703528 - April 2026 (106 comments)
The future of everything is lies, I guess – Part 5: Annoyances - https://news.ycombinator.com/item?id=47730981 - April 2026 (169 comments)
The Future of Everything Is Lies, I Guess: Safety - https://news.ycombinator.com/item?id=47754379 - April 2026 (180 comments)
The future of everything is lies, I guess: Work - https://news.ycombinator.com/item?id=47766550 - April 2026 (217 comments)
The Future of Everything Is Lies, I Guess: New Jobs - https://news.ycombinator.com/item?id=47778758 - April 2026 (178 comments)
* (That first title was different because of https://news.ycombinator.com/item?id=47695064 - as you can see, I gave up.)
p.s. Normally we downweight subsequent articles in a series because avoiding repetition of any kind is the main thing that keeps HN interesting. But we made an exception in this case. Please don't draw conclusions from that since we'll probably get less series-ey, not more, after this! Better to bundle into one longer article.
> ML assistance reduces our performance and persistence, and denies us both the muscle memory and deep theory-building that comes with working through a task by hand: the cultivation of what James C. Scott would call metis.
Imagine starting university now... I can't imagine having learned what I did at engineering school if it weren't for all the time lost on projects, on errors. And I can't really believe I would have had the mental strength required to not use LLMs on course projects (or side projects) when I had deadlines and exams coming, yet also wanted to be with friends and enjoy those years of my life.
I think it is really easy for us to be dogmatic when talking about the future, since believing we know what is going to happen quells our fears. I think, in reality, no one knows what is going to happen with AI. We are at a turning point in human history, and it is easy to blame Anthropic's engineers and tell them to quit their jobs, but the reality is that they are probably in the same position you are. There is no one true solution. We do not know if this is going to be analogous to automobiles; we don't know anything. I think it is courteous to think about these things before telling people to quit their jobs.
I agree with the general sentiment that the structure of society is going to change, but I don't know what the satisfying solution is. It's hard to imagine not participating will work, or even be financially viable for me, for long.
What the author is missing is that his decision to limit the use of LLMs in his work omits the part where he "can." He is resourceful and accomplished enough to do the work he desires with no LLMs, but most people actually can't. There are whole swaths of software engineers who don't write tests because "it slows them down," when really they have never learned how to write testable code. And when thrust into an environment where they need to learn quickly, they don't really have a way not to use AI; if they don't, someone else will, and take all the credit.
Learning how software is built is hard and gruelling work, and you need to constantly invest in yourself. Trouble is there is no time left to “go back to basics and learn FP” for example, because you also need to keep up with all the new LLM stuff happening on top of that.
It is easy for us who already have the foundational knowledge to be able to step back, take the wheel and try to do it ourselves, but plenty of people simply don’t have that option.
And I expect this trend to deepen and broaden. There will definitely be a lot more “witches” than actual engineers.
I think it's shortsighted to dismiss the utility of these technologies for learning. I find personally that putting the LLM into an argumentative state then having it challenge my assertions forces me to learn and develop my own thoughts and feelings more effectively than writing does these days, and I find that interrogating a model on a subject can teach me more about the subject per unit time than reading a textbook or research paper. Sometimes I'll even just have it read the raw text out loud- then interject and have a conversation about a specific thing that I don't have the domain knowledge to fully understand. Other times we'll end up off on a productive tangent.
Interactive learning and thinking is underrated, in part I think because of the cynical (and likely accurate) assumption about what the laziest among us will do with the tools, but projected onto everyone.
The epilogue is what speaks to me most. All of the work I've done with LLMs takes that same kind of approach. I never link them to a git repo, and I only ever ask them to make specific, well-formatted changes so that I can pick up where they left off. My general feeling is that LLMs make the bullshit I hate doing a lot easier: project setup, integrating theming, preparing/packaging resources for installability/portability, basic dependency preparation (Vite for JS/TS, UI libs for C#, stuff like that), UI layout scaffolding (main panel, menu panel, theme variables), auto-update fetch-and-execute loops, etc.
And while I know they can do the nitty-gritty UI work fine, I feel like I can work just as fast, or faster, on UI without them than with them. With them it's a lot of "no, not that, you changed too much/too little/the wrong thing"; without them I just execute, because it's a domain I'm familiar with.
So my general idea of them is that they are "90% machines". Great at doing all of the "heavy lifting" bullshit of initial setup or large structural refactoring (that doesn't actually change functionality, just prepares for it) that I never want to do anyway, but not necessary and often unhelpful for filling in that last 10% of the project just the way I want it.
Of course, since any good PM knows that 90% of the code written only means 50% of the project finished (at best), it still feels like a hollow win. So I often consider the situation in the same way as that last paragraph. Am I letting the ease of the initial setup degrade my ability to set up projects without these tools? Does it matter, since project setup and refactoring are one-and-done, project-specific, configuration-specific quagmires where the less thought spent on fiddly, perfect text-matching, the better? Can I keep using these things and still use them well (directing them on architecture/structure) if I lose my grounded sense of what the underlying work is? Good questions, as far as I'm concerned.
The reasons laid out in this article are why it's so important to share how we are using AI and what we are getting in return. I've been trying to contribute towards a positive outcome for AI by tracking how well the big AI companies are doing at being used to solve humanitarian problems. I can't really do most of the suggestions in the article; they seem like a way to slow progress. I don't want to slow AI progress; I want the technology we already have to be deployed for useful and helpful things.
Question: The author makes this article available on the web as a series of posts, a PDF (https://aphyr.com/data/posts/411/the-future-of-everything-is...) and an EPUB. What do you suppose their tooling looks like?
Is there a single "document containing all the words," and does it update the website, PDF, and EPUB whenever you change it?
What struck me was that the presentation is beautiful. It seems worth emulating. But that raises the question of what format you'd write your original words in. Do you suppose they just use Markdown files, or something more elaborate?
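Pure speculation on my part, but given that the author mentions using Pandoc for typesetting, one plausible setup is a single Markdown source with one build target per output format. A minimal Makefile sketch, where `post.md`, `style.css`, and the title metadata are all invented placeholders:

```makefile
# Hypothetical single-source pipeline: one Markdown file in, three formats out.
# Assumes pandoc is installed; all file names here are placeholders.

SRC = post.md

all: post.html post.pdf post.epub

# Standalone HTML page for the website
post.html: $(SRC)
	pandoc $(SRC) --standalone --css style.css -o post.html

# Typeset PDF (xelatex handles Unicode and custom fonts)
post.pdf: $(SRC)
	pandoc $(SRC) --pdf-engine=xelatex -o post.pdf

# EPUB for e-readers
post.epub: $(SRC)
	pandoc $(SRC) --metadata title="Post Title" -o post.epub
```

With something like this, editing the one source file and re-running `make` regenerates all three artifacts, and a deploy step just copies the outputs into place.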
Two years ago, I was enjoying a drink with my wife, her friend, a very senior female VC partner, and another friend.
Somehow we talked AI in some depth, and the VC at one point said (about AI): “I don’t know what our kids are going to do for work. I don’t know what jobs there will be to do.”
That same VC invests in AI companies and by what I heard about her, has done phenomenally well.
I think about that exchange all the time. Worried about your own kids but acting against their interests. It unsettled me, and Kyle’s excellent articles brought that back to a boiling point in my mind.
Edit: are->our
I've been thinking about this a lot recently, and I don't know if it is possible to stop. I've been thinking the most impactful thing would be to create open-source tools to make it easier to build agents on top of open-source models. We have a few open-source models now, maybe not as good as Gemini, but if the agent were sufficiently good, could that compensate?
I think that would democratize some of the power. Then again, I haven't been super impressed with humanity lately and wonder if that sort of democratization of power would actually be a good thing. Over the last few years, I've come to realize that a lot of people want to watch the world burn, way more than I had imagined. It is much easier to destroy than to build. If we make it easier for people to build agents, is that a net positive overall?
The "Stop" part should have been expanded.
AI doesn't get most of its value from someone just using it. Here's my personal take on what we should stop doing, starting with the most impactful:
* Cut off the low-entropy sources. This includes open source, articles (yes, ones like the one above will feed the machine), and thoughtful feedback (the kind that generates "you are absolutely right" BS).
* Cheer on the slop. After some time fighting slop in my circles, I found it's counterproductive: it wastes my resources while (sometimes) bringing attention to slop creators. A few months ago this started as a joke, because I thought the problem was too obvious, but instead the slopper launched a CRM-like app for a local office with client-side authentication and in-memory (no persistence) backend storage. He was rewarded for it at the local meeting. The more stories we have like this, the better.
* Use AI to reply to, review, or interact with slop in any way. Make it an AI-only reply by prompting with nothing of substance. One example was an email, pages and pages of generated text, asking me to collect some data and send it back. The prompt was: "You are {X} and got this email; write a reply."
As a consequentialist who shares the author's concerns, I feel fine (ethically) using AI without advancing it. Foregoing opportunities meaningful to yourself for deontological reasons when it won't have any impact on society is pointless.
> Prospective clients ask Claude to do the work they might have hired me for
Across all 10 articles, I think this is really the only point that lands.
I think we have to learn how to overcome and thrive in the new world. The gravy train of CS careers is gone for everyone :(
We are one technological breakthrough away from AGI. Seriously, what happens when, for example, a viable room-temperature superconductor (remember LK-99 lol) gets discovered? Next thing you know we have 3D-stacked chips operating at THz speeds with virtually zero heat output, batteries that can charge instantly, etc.
I know an RTSC is the holy grail, but it really feels like AI is at the same stage computers were in the '80s. I used to be extremely bearish and think AI was useless, but I've done a total 180 in the last 6 months. If these things get better (they will), nobody's job will be safe.
The epilogue looked weak to me. The previous sections explored why it is essentially wrong to use current LLM technology (the answers can be wrong, or not even wrong) and why it has to be that way. The epilogue focuses more on (our) obsolescence in a paradigm shift toward widespread LLM use, and not on the models doing their work right or wrong.
And that should be the core question. There is a new, emergent technology: should we throw everything away and embrace it, or are there structural reasons why it should come with big warning labels? Avoiding LLMs because they do their work too well may make sense as a global, systemic approach, but decision makers optimize locally, for their own budget/productivity/profit. If there are perceived risks because the models are not reliable, though, that is another matter.
I couldn't help but resonate with a lot of what Kyle says here.
If we haven't already, we will soon lose the ability to ask whether AI is actually helping humans (an overwhelming majority of them, not a handful), considering how we are steaming ahead on this path!
Eh, this has been a disappointing series to read from the person who wrote all those Jepsen write-ups and the technical-interview series. The articles are mostly a regurgitation of all the negativity that gets aired here all the time (a lot of it already fixed or debunked), with zero discussion of utility. This is more befitting Zitron than aphyr. Where's the sharp, incisive wit and the deep insights?
At least he realizes this technology is unlikely to slow down. With international relations as they are, it's MAD all over again, only the "D" is a fuzzy, hypothetical thing nobody can name, so even that bit of deterrence is lost. Yet finally he ends with the most uninspired advice of all: "we should try, unsuccessfully, to stop it."
Everyone must understand: for all of history, progress and productivity and value creation overall could only scale with people. Now it can scale with power and compute. This is a tremendous economic force, akin to a force of nature, that is nigh impossible to stop. (I always did think the Butlerian Jihad was the biggest plot hole in Dune.)
My advice is this: we have no choice but to adapt. We must realize that, by a stroke of luck, this is a power available to us more than the capital class. If they can scale without people, so can we. But because harnessing AI effectively requires hard skills -- at least for now -- that the capital class don't have and used to pay us for, we might even scale better than them!
Carpe diem.
From here in the UK the site just says:
"Unavailable Due to the UK Online Safety Act [...] Now might be a good time to call your representatives."
So I fired up a VPN, and it appears to be a personal blog, about AI risks.
The geo-block is kind of a shame, as the writing is good and there appears to be nothing about the site that makes it subject to the Online Safety Act.
I'm concerned that there's no real way to "opt out" of an AI future realistically. Is this something that people are seriously thinking they'll be able to do and successfully stay gainfully employed and contributing to the world?
If there's too many lies, "source or gtfo" becomes more important
> I have never used an LLM for my writing, software, or personal life
Must be nice to not have a paycheck tied to using this tech. For many people, myself included, it's either use it (adapt) or lose your job. Most of us rely on our jobs to pay bills and live in the modern world.
We should consider how we came to be so powerless. The cringe "people gave their lives for that flag" line is actually true, and we're trading it away for what? Not having to get out of our gaming chairs?
The reason you can't beat index funds is the people who build the market built a system that benefits them and them alone; the index fund is the pitchfork dividend (what you pay to avoid getting pitchforked). The reason you can't get your congressperson on the line is (mostly) they built a system where the only way to influence them is to enrich them; voting is the pitchfork dividend.
The way to build a society that runs on reality is to build it by whatever means possible, then defend it by any means necessary. The only societies that matter are the ones that survive.
I want to build it. I don't wanna build a fuckin crypto app, a stupid ass agent harness, or yet another insipid analytics platform. I want to build a society that furthers the liberation of humankind from the vicissitudes of nature, the predation of tyranny and the corruption of greed. I believe it is possible, and I want to prove it out.
The future is uncertain, always has been. Is the future more uncertain than in the past?
I agree with much of the analysis, and originally I would have subscribed to the recommended action (resistance), at this point in time however, I think that advice is severely misguided.
We have already passed the critical point. The LLMs, the agent harnesses are here. There is too much willpower, capital, and risk behind these technologies now—the automobile has landed, thousands of people have purchased it already, protesting the car won't undo it at this point.
What you can do that will be meaningful is to instead understand the new car, and understand it deeply. Use that understanding to carry the values you care about into the new world and re-articulate them. Make the car safer; push for tactical regulations on it. If you are privileged enough to be able to forgo its use entirely, sure, but that advice is not uniformly applicable. People forget that being able to simply opt out of certain things is often only a viable option when you are already in a certain position. What we really need are the heavy skeptics to stop falling for the luddite temptation and to start bringing their critical lens to bear in positive ways on this new technology, to make it safer and better. By opting out and staging a feeble resistance you won't do anything other than let the current dangerous power consolidation continue.
From the article: "I’ve thought about this a lot over the last few years, and I think the best response is to stop. ML assistance reduces our performance and persistence, and denies us both the muscle memory and deep theory-building that comes with working through a task by hand: the cultivation of what James C. Scott would call metis."
"What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking-there's the real danger" - Frank Herbert, God Emperor of Dune
Rudolph built his engine, Henry built his car, Popular Mechanics published it. 2000 biofueling stations across the nation. All made illegal by special interests months before the article was published. Information didn't move fast enough to let the editors know that innovation was illegal.
The author seems depressed. My personal take is that no one can change technological evolution; it's going to happen.
Just flow with it and all its bullshit. Yeah, life will be a little worse, but it will still be better than for those who choose to completely ignore it.
If the world is going mad, be the craziest of all these crazy motherfuckers. At least it's interesting; I'm very curious to know what the world will look like in 10 or 20 years.
Maybe, just maybe lol, we'll finally have this dreamed-of world where robots do all the work and we humans can just enjoy ourselves 24/7.
> "Unavailable Due to the UK Online Safety Act. Now might be a good time to call your representatives."
Having the "call your representatives" link point to your own website as well isn't particularly helpful... I already can't get to it.
My first thought reading this was that AI might make coding more of an art or tradition. I felt admiration for those of us who still know how to write the now-ancient coding languages of yore.
But then I read this at the end:
> This piece, like most all my words and software, was written by hand—mainly in Vim. I composed a Markdown outline in a mix of headers, bullet points, and prose, then reorganized it in a few passes. With the structure laid out, I rewrote the outline as prose, typeset with Pandoc. I went back to make substantial edits as I wrote, then made two full edit passes on typeset PDFs. For the first I used an iPad and stylus, for the second, the traditional pen and paper, read aloud.
Then you realize the context of this article and who is writing it. No hate to my man here, but clearly this is someone who has the desire and time to make things difficult for themselves and takes pride in it. It's needless effort in this day and age. But hey, by his own analogy, plenty of gearheads love their old cars and making 'em work. Those guys are some of the most knowledgeable, and I respect that, but also... that same group is gonna hate on any new technology and complain it isn't the old way.
The idea that Claude might be able to help you change the color of your led lighting as a legitimate counter to things like a less usable world wide web, worse government services, the loss of human ability, etc. is excellent parody.
> And if I’m wrong, we can always build it later.
That's the rub: if we build it later, our economy crashes in the meantime.
I wonder if the author would advocate for us to stop driving cars as well.
Some people like roasting marshmallows. Others think that setting the house on fire may have downsides.
This article is a good example of how ideology can lead people down irrational paths.
The comparison to automobiles changing streets is thrown around a lot. But I feel AI is fundamentally different. It is not a technological change like the internet which brought us huge amounts of opportunities in so many different directions. AI’s goal is to automate (in other words, replace) us.
Despite all the AI hype, I wonder how much it only exists in the tech bubble full of terminally online folks. Unless you spend significant part of your day online, most of the AI risks mentioned in this series are probably negligible. The most affected demographic is computer nerds that grew up enjoying utopian Web that is now turning dark.
One of the "lies" that concerns me is AI-generated music and its deterioration of the personal connection between musician and listener. As MCA from the Beastie Boys said, "If you can feel what I feel then it's a musical masterpiece." The listener feels a connection to the musician (and other people) with sad songs because everyone has felt sad, or with love songs because everyone has fallen in love, and so on. The listener can still get a feeling from AI-gen'ed music, but is it the same? What is the connection? Or, has that "connection" between musician and listener always been bullshit? That is, has it always been just about music triggering your brain to make you feel a certain way, and the source of that feeling really isn't what people care about - just give me a feeling?
We need Universal Basic Income (UBI), and we have the right to demand it:
- LLMs were trained on OUR copyrighted works and OUR open source code, which was licensed for human use (the MIT license explicitly says "Persons").
- A monetary system that has been centralizing opportunities and creating an asymmetric playing field due to the Cantillon Effect caused by government and institutional money creation.
Either of these points on its own entitles us to as much UBI money as we need.
I think even without AI or any technological progress, the monetary system is itself enough to create the kind of massive centralization that we've been seeing. People have been saying that for years before LLMs. People are now blaming AI for the fact that some people can't get jobs but it's not the root cause.
Software devs won't be able to get jobs as plumbers either because the plumbing sector in many countries has become insanely regulated... Society has been fundamentally corrupted.
I only see two ways forward:
- Communism with UBI (closer to what we have now)
- Abolish all regulations and have Capitalism again.
We've recreated pre-enlightenment intellectual culture. Authority and logical consistency matter. Reality doesn't.
"carbon emissions" sneed
"I could retrain, but my core skills—reading, thinking, and writing—are squarely in the blast radius of large language models."
Yes.
For the lifetime of almost everyone alive now, reading, thinking, and writing have been valued skills which moved one up in society's hierarchy. This is a historical anomaly. Prior to 1800 or so, those skills were not all that useful to the average farmer. There were more smart people than jobs for them. Gradually, more jobs for smart people were developed, but not until WWII did the demand start to exceed the supply. Hence the frantic technical training efforts of WWII and the following college boom. This was the golden age of upward mobility.
It's hard to imagine this today. Read novels from the 18th century to get a feel for it. See who's winning and who's struggling, who rises and who falls, and why. Jane Austen's novels are a good start.
The nerds didn't take over until very late in the 20th century. There were very few rich nerds until then. Computing was once a very tiny world. You could not get rich working for IBM. The ones who left and got rich were in sales.
So what was valued? Physical robustness. Strength, perhaps brutality. Competence in physical tasks. Honesty. Parentage. Birth order (see primogeniture). Those matter in pre-technological societies, and they matter in failed societies now. Those are perhaps humanity's core values.
That may be where we go once AI does the thinking. That's where we go when smarts are not a scarce resource.