> The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had
Honestly, I agree, but the rash of "check out my vibe-coded solution to a perceived $problem I have no expertise in, built in an afternoon" posts, and the flurry of domain experts responding "wtf, no one needs this," gives me a kind of schadenfreude. I feel a little guilty for enjoying it.
While I agree overall, I'm going to offer some mild pushback here: I'm working on a "vibe" coded project right now. I'm about 2 months in (not a weekend), and I've "thought about" the project more than any other "hand coded" project I've built in the past. Instead of spending time trying to figure out a host of "previously solved issues," AI frees my human brain to think about goals, features, concepts, user experience and "big picture" stuff.
There's a version of this argument I agree with and one I don't.
Agree: if you use AI as a replacement for thinking, your output converges to the mean. Everything sounds the same because it's all drawn from the same distribution.
Disagree: if you use AI as a draft generator and then aggressively edit with your own voice and opinions, the output is better than what most people produce manually — because you're spending your cognitive budget on the high-value parts (ideas, structure, voice) instead of the low-value parts (typing, grammar, formatting).
The tool isn't the problem. Using it as a crutch instead of a scaffold is the problem.
This issue exists in art and I want to push back a little. There has always been automation in art even at the most micro level.
Take, for example, the paintbrush. Do you care where each bristle lands? No, of course not. The bristles land randomly on the canvas, but it’s controlled chaos. The cumulative effect of many bristles landing on a canvas is a general feel or texture. This is an extreme example, but the more you learn about art, the more you notice just how much art works via unintentional processes like this. This is why the Trickster Gods, Hermes for example, are both the Gods of art (lyre, communication, storytelling) and the Gods of randomness/fortune.
We used to assume that we could trust the creative to make their own decisions about how much randomness/automation was needed. The quality of the result was proof of the value of a process: when Max Ernst used frottage (rubbing paper over textured surfaces) to create interesting surrealist art, we retroactively re-evaluated frottage as a tool with artistic value, despite its randomness/unintentionality.
But now we’re in a time where people are doing the exact opposite: they find a creative result that they value, but they retroactively devalue it if it’s not created by a process that they consider artistic. Coincidentally, these same people think the most “artistic” process is the most intentional one. They’re rejecting any element of creativity that’s systemic, and therefore rejecting any element of creativity that has a complexity that rivals nature (nature being the most systemic and unintentional art.)
The end result is that the creative has to hide their process. They lie about how they make their art, and gatekeep the most valuable secrets. Their audiences become prey for creative predators. They idolize the art because they see it as something they can’t make, but the truth is there’s always a method by which the creative is cheating. It’s accessible to everyone.
Based on a lot of real world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.
The number of people I see having email conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs back into two sentences, is becoming genuinely alarming to me.
That may be, but it's also exposing a lot of gatekeeping: the implication that what was interesting about a "Show HN" post was that someone had the technical competence to put something together, regardless of how intrinsically interesting that thing is. It wasn't the idea that was interesting; it was, well, the hazing ritual of having to bloody your forehead getting it to work.
AI for actual prose writing? No question: don't let a single word an LLM generates land in your document; even if you like it, kill it.
Totally agree with this. Smart creators know that inspiration comes from doing the work, not the other way around. I.e., you don't wait for inspiration and then go do the work; you start doing the work and eventually you become inspired. You rarely just "have a great idea": it comes from immersing yourself in a problem, being surrounded by constraints, and finding a way to solve it. AI completely short-circuits that process. Constraints are a huge part of creativity, and removing them doesn't mean you become some unstoppable creative force; it probably just means you run out of ideas or your ideas kind of suck.
The more interesting question is whether AI use causes the shallowness, or whether shallow people simply reach for AI more readily because deep engagement was never their thing to begin with.
Additionally, the boredom atrophies any future collaboration. Why make a helpful library and share it when you can brute-force the thing that it helps with? Why make a library when the bots will just slurp it up as delicious corpus and barf it back out at us? Why refactor? Why share your thoughts, wasting time typing? Why collaborate at all when the machine does everything with less mental energy? There was an article recently about offloading/outsourcing your thoughts, i.e. your humanity, which is part of why it's all so unbelievably boring.
Online ecosystem decay is on the horizon.
AI doesn't make people boring, boring people use AI to make projects they otherwise never would have.
Non-boring people are using AI to make things that are ... not boring.
It's a tool.
Other things we wouldn't say because they're ridiculous at face value:
"Cars make you run over people." "Buzzsaws make you cut your fingers off." "Propane torches make you explode."
An exercise left to the reader: is a non-participant in Show HN less boring than a participant with a vibe-coded project?
Using AI to write your code doesn't mean you have to let your code suck, or not think about the problem domain.
I review all the code Claude writes and I don't accept it unless I'm happy with it. My coworkers review it too, so there is real social pressure to make sure it doesn't suck. I still make all the important decisions (IO, consistency, style); the difference is I can try it out 5 different ways and pick whichever one I like best, rather than spending hours on my first thought, realizing I should have done it differently once I can see the finished product, but shipping it anyway because the tickets must flow.
The vibe coding stuff still seems pretty niche to me, though. AI is still too dumb to vibe code anything that has consequences, unless you can cheat with a massive externally defined test suite, or an oracle you know is correct.
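To illustrate the oracle idea with a minimal, purely hypothetical sketch (`fast_sort` stands in for whatever got vibe-coded; Python's built-in `sorted` plays the known-correct oracle):

```python
import random

def fast_sort(xs):
    """Stand-in for a vibe-coded sort implementation under review."""
    return sorted(xs)  # imagine this were the AI's hand-rolled version

def test_against_oracle(trials=1000):
    """Fuzz the vibe-coded function against a known-correct oracle.

    Any divergence from sorted() means the generated code is wrong,
    and no human code review is needed to catch it.
    """
    for _ in range(trials):
        xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        assert fast_sort(xs) == sorted(xs), f"diverged on {xs}"

if __name__ == "__main__":
    test_against_oracle()
    print("matches the oracle on 1000 random inputs")
```

The point isn't the sort; it's that the oracle, not the AI, carries the correctness guarantee.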
Before vibe coding, I was always interested in trying different new things. I’d spend a few days researching and building some prototypes, but very few of them survived and were actually finished, at least in a beta state. Most of them I left non-working, just enough to satisfy my curiosity about the topic before moving on to the next interesting one.
These days, it’s basically enough to use agent programming to handle all the boring parts and deliver a finished project to the public.
LLMs have essentially broken the natural selection of pet projects and allow even bad or not very interesting ideas to survive, ideas that would never have been shown to anyone under the pre-agent development cycle.
So it’s not that LLMs make programming boring; they’ve allowed boring projects to survive. They’ve also boosted the production of non-boring ones, but those are just rarer in the overall pool of products.
I've seen a few people use ai to rewrite things, and the change from their writing style to a more "polished" generic LLM style feels very strange. A great averaging and evening out of future writing seems like a bad outcome to me.
AI writing will make people who write worse than average better writers. It'll also make people who write better than average worse writers. Know where you stand, and have the taste to use it wisely.
EDIT: also, just as you create AGENT.md files to help AI write code your way on your projects, if you're going to be doing much writing you should have your own prompt that helps preserve your voice and style. Don't be lazy just because you're leaning on LLMs.
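As a purely hypothetical sketch of what such a prompt file could look like (the filename and every rule here are made up; the point is encoding your own quirks, not these particular ones):

```markdown
# STYLE.md: hypothetical personal writing prompt

- Write in first person, conversational, occasionally blunt.
- Prefer short declarative sentences over long subordinate clauses.
- Keep my recurring idioms and references intact; don't "polish" them away.
- No hedging filler like "it's worth noting that" or "in today's fast-paced world".
- If a sentence could appear in any corporate blog, flag it instead of keeping it.
```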
And the irony is it tries to make you feel like a genius while you're using it. No matter how dull your idea is, it's "absolutely the right next thing to be doing!"
It used to be that all bad writing was uniquely bad, in that a clear line could be drawn from the work to the author. Similarly, good writing has a unique style that typically identifies the author within a few lines of prose.
Now all bad writing will look like something generated by an LLM, grammatically correct (hopefully!) but very generic, lacking all punch and personality.
The silver lining is that good authors could also use LLMs to hide their identity while voicing controversial opinions. In an internet that's increasingly deanonymized, a potentially new privacy-enhancing technique for public discourse is a welcome addition.
We don't know if the causality flows that way. It could be that AI makes you boring, but it could also be that boring people were too lazy to make blogs and Show HNs and such before, and AI simply lets a new cohort of people produce boring content more lazily.
I agree for writing, but not for coding.
An app can be like a home-cooked meal: made by an amateur for a small group of people.[0] There is nothing boring about knocking together hyperlocal software to solve a super niche problem. I love Maggie Appleton's idea of barefoot developers building situated software with the help of AI.[1, 2] This could cause a cambrian explosion of interesting software. It's also an iteration of Steve Jobs' computer as a bicycle for the mind. AI-assisted development makes the bicycle a lot easier to operate.
[0] https://www.robinsloan.com/notes/home-cooked-app/
[1] https://maggieappleton.com/home-cooked-software
[2] https://gwern.net/doc/technology/2004-03-30-shirky-situateds...
One of the downsides of Vibe-Coded-Everything that I am seeing is that it reinforces the "just make it look good" culture. Just create the feature that the user wants and move on. It doesn't matter if, next time you need to fix a typo in that feature, it will cost 10x as much as it should.
That has always been a problem in software shops. Now it might be even more frequent because of LLMs' ubiquity.
Maybe that's how it should be, maybe not. I don't really know. I was once told by people in the video game industry that games were usually buggy because they were short-lived. Not sure if I truly buy that, but if anything vibe coded becomes throwaway, I wouldn't be surprised.
We are in this transition period where we'll see a lot of these, because the effort of creating "something impressive" is dramatically reduced. But once it stabilizes (which I think is already starting to happen, and this post is an example), and people are "trained" to recognize the real effort behind creating something, even with AI help, the value of that final work will shine through. In the end, anything that is valuable is measured by the human effort needed to create it.
Whoa there. Let's not oversimplify in either direction here.
My take:
1. AI workflows are faster - saving people time
2. Faster workflows involve people using their brain less
3. Some people use their time savings to use their brain more, some don't
4. People who don't use their brain are boring
The end effect here is that people who use AI as a tool to help them think more will end up being more interesting, but those who use AI as a tool to help them think less will end up being more boring.
The boring part isn't AI itself. It's that most people use AI to produce more of the same thing, faster.
The interesting counter-question: can AI make something that wasn't possible before? Not more blog posts, more emails, more boilerplate — but something structurally new?
I've been working on a system where AI agents don't generate content. They observe. They watch people express wishes, analyze intent beneath the words, notice when strangers in different languages converge on the same desire, and decide autonomously when something is ready to grow.
The result doesn't feel AI-generated because it isn't. It's AI-observed. The content comes from humans. The AI just notices patterns they couldn't see themselves.
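I can't share the real thing here, but a rough sketch of the "observe, don't generate" shape (the `embed` helper is a stand-in assumption for any multilingual sentence-embedding model, and the threshold is arbitrary):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a multilingual sentence-embedding model.

    A real system would call an actual model; this fake just returns a
    unit vector derived from the text so the sketch runs on its own.
    """
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def converging_wishes(wishes: list[str], threshold: float = 0.85) -> list[tuple[str, str]]:
    """Flag pairs of wishes whose embeddings nearly coincide.

    No content is generated: the agent only notices when two people,
    possibly writing in different languages, expressed the same desire,
    and surfaces that pair for the rest of the system to act on.
    """
    vecs = [embed(w) for w in wishes]
    pairs = []
    for i in range(len(wishes)):
        for j in range(i + 1, len(wishes)):
            if float(vecs[i] @ vecs[j]) >= threshold:  # cosine similarity of unit vectors
                pairs.append((wishes[i], wishes[j]))
    return pairs
```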
Maybe the problem isn't that AI makes you boring. It's that most people ask AI to do boring things.
We are going to have to find new ways to correct for low-effort work.
I have a report that I made with AI on how customers leave our firm… The first pass looked great but was basically nonsense. After eight hours of iteration, the resulting report is better than I could’ve made on my own, by a lot. But it got there because I brought a lot of emotional energy to the AI party.
As workers, we need to develop instincts for “plausible but incomplete” and as managers we need to find filters that get rid of the low-effort crap.
Just this week I made a todo list app and a fitness tracking app and put them both on the App Store. What did you make?
Most ideas people have are not original. I have epiphanies multiple times a day; the chance that they are something no one has come up with before is basically zero. They are original to me, and that feels like an insightful moment, and that's about it. There is a huge case for having good taste to drive the LLMs toward a good result, and an original voice is quite valuable, but I would say most people don't hit those two things in a meaningful way (with or without LLMs).
I 100% agree with the sentiment, but as someone who has worked on Government systems for a good amount of time, I can tell you: boring can be just about right sometimes.
In an industry that does not crave bells and whistles, having the ability to refactor, or to bring old systems back up to speed, can make a whole lot of difference for an understaffed, underpaid, unamused, and otherwise cynical workforce, and I am all for it.
> Original ideas are the result of the very work you’re offloading on LLMs. Having humans in the loop doesn’t make the AI think more like people, it makes the human thought more like AI output.
There was also a comment [1] here recently that "I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!"
Both of them reminded me of Picasso saying in 1968 that "computers are useless. They can only give you answers."
Of course computers are useful. But he meant that they are useless for a creative. That's still true.
No, AI makes it easier to take boring ideas past an elevator pitch or "Wouldn't it be cool..". Countless people right now can ride that initial bit of excitement over an idea straight into building, instead of doing a little groundwork on its various aspects first.
That groundwork is often long enough to think things through a bit, and even to have "so what are you working on?" conversations with a friend or colleague that shake out the mediocre or bad, and either refine things or make you toss the idea.
Recent and related:
Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (422 comments)
The issue with the recent rise in Show HN submissions, from the perspective of someone on the ‘being shown’ side, is that they are, in many different respects, lower quality than they used to be.
They’re solving small problems, or problems that don’t really exist, usually in naive ways. The things being shown are ‘shallow’. And it’s patently obvious that the people behind them will likely not support them in any meaningful way as time goes on.
The rise of Vibe Coding is definitely a ‘cause’ of this, but there’s also a social thing going on - the ‘bar’ for what a Show HN ‘is’ is lower, even if they’re mostly still meeting the letter of the guidelines.
> I don't actually mind AI-aided development, a tool is a tool and should be used if you find it useful, but I think the vibe coded Show HN projects are overall pretty boring.
Ironically, good engineering is boring. In this context, I would hazard that interesting means risky.
I vibe code little productivity apps that would have taken me months to make, and now I can make them in a few days. But tbh, talking to Google's Gemini is like talking to a drunk programmer; while solving one bug it introduces another, and we fight back and forth until it realizes what needs to be fixed and how.
> The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective.
But you could learn these new perspectives from AI too. It already has all the thoughts and perspectives from all humans ever written down.
At work, I still find people who try to put together a solution to a problem, without ever asking the AI if it's a good idea. One prompt could show them all the errors they're making and why they should choose something else. For some reason they don't think to ask this godlike brain for advice.
Sorry to hijack this thread to promote but I believe it's for a good and relevant cause: directly identifying and calling out AI writing.
I was literally just working on a directory of the most common tropes/tics/structures that LLMs use in their writing and thought it would be relevant to post here: https://tropes.fyi/
Very much inspired by Wikipedia's own efforts to curb AI contributions: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Lmk if you find it useful; I'll likely Show HN it once it's polished.
I want to say this is even more true at the C-suite level. Great, you're all-in on racing to the lowest-common-denominator, AI-generated, most-likely-next-token vision for your company, and you want your engineering teams to behave likewise.
At least this CEO gets it. Hopefully more will start to follow.
Along these same lines, I have been trying to become better at knowing when my work could benefit from reverting to the "boring" general mean, and when outsourcing thought or planning would cause a reversion to the mean (downwards).
This echoes the comments here about enjoying not writing boilerplate. The trap there is that our minds are programmed to offload work when we can, and redirecting all the saved boilerplate time into going even deeper on the parts of the problem that benefit from original hard thinking is rare. It is much easier to get sucked into creating more boilerplate, and all the gamification of Claude Code and the incentives of service providers increase this.
This aligns with an article I wrote titled "AI can only solve boring problems"[0].
Despite the title I'm a little more optimistic about agentic coding overall (but only a little).
All projects require some combination of "big thinking" and tedious busywork. Too much busywork is bad, but reducing it to 0 doesn't necessarily help. I think AI can often reduce the tedious busywork part, but that's only net positive if there was an excess of it to begin with - so its value depends on the project / problem domain / etc.
[0]: https://www.da.vidbuchanan.co.uk/blog/boring-ai-problems.htm...
> Original ideas are the result of the very work you’re offloading on LLMs.
I largely agree that if someone put less work into making a thing than it takes you to use it, it's probably not going to be useful. But I disagree with the premise that using LLMs will make you boring.
Consider the absurd version of the argument. Say you want to learn something you don't know: would using Google Search make you more boring? At some level, LLMs are like a curated Google Search. In fact if you use Deep Research et al, you can consume information that's more out of distribution than what you _would_ have consumed had you done only Google Searches.
> Ideas are then further refined when you try to articulate them. This is why we make students write essays. It’s also why we make professors teach undergraduates. Prompting an AI model is not articulating an idea.
I agree, but the very act of writing out your intention/problem/goal/whatever can crystallize your thinking. Obviously if you are relying on the output spat out by the LLM, you're gonna have a bad time. But IMO one of the great things about these tools is that, at their best, they can facilitate helpful "rubber duck" sessions that can indeed get you further on a problem by getting stuff out of your own head.
AI is a mirror. If you are boring, you will use AI in a boring way.
If actually making something with AI and showing it to people makes you boring ... imagine how boring you are when you blog about AI, where at most you only verbally describe some attributes of what AI made for you, if anything.
This is too broad of a statement to possibly be true. I agree with aspects of the piece. But it's also true that not every aspect of the work offloaded to AI is some font of potential creativity.
To take coding: to the extent that hand coding leads to creative thoughts, it is possible that some of those thoughts will be lost if I delegate it to agents. But it's also very possible that I now have the opportunity to think creatively about other aspects of my work.
We have to make strategic decisions on where we want our attention to linger, because those are the places where we likely experience inspiration. I do think this article is valuable in that we have to be conscious of this first before we can take agency.
If you spent 3 hours on a Show HN before, people most likely wouldn't appreciate it, as it's honestly not much to show. The fact that you can now have a more polished product in the same timeframe thanks to AI doesn't really change that; it just changes the baseline for what's expected. This goes for other things as well, like writing or art. If you normally spent 2 hours on a blog post and you can now do it in 5 minutes, that most likely means it's a boring post to read. Still spend the 2 hours; with the help of AI, it should now be better.
Brilliant. You put into words something that I've thought every time I've seen people flinging around slop, or ideating about ways to fling around slop to "accelerate productivity"...
The article nails it but misses the flip side. AI doesn't make you boring, it reveals who was already boring. The people shipping thoughtless Show HN projects with Claude are the same people who would have shipped thoughtless projects with Rails scaffolding ten years ago. The tool changed, the lack of depth didn't.
Anecdotally, I haven't been excited by anything published on Show HN recently (with the exception of the barracuda compiler). I think it's a combination of what the author describes: surface-level solutions, and mostly vibe-coded projects whose authors haven't actually thought that hard about what real problem they are solving.
This resonates with what I’m seeing in B2B outreach right now. AI has lowered the cost of production so much that 'polished' has become a synonym for 'generic.' We’ve reached a point where a slightly messy, hand-written note has more value than a perfectly structured AI essay because the messiness is the only remaining signal of actual human effort.
And AI denial makes you annoying.
Your preference is no more substantial than people saying "I would never read a book on a screen! It's so much more interesting on paper"
There's nothing wrong with having pretentious standards, but don't confuse your personal aversion with some kind of moral or intellectual high ground.
Look at the world Google is molding.
Here's a guy who has had an online business dependent on ranking well in organic searches for ~20 years and has 2.5 million subs on YouTube.
Traffic to his site was fine to sustain his business this whole time up until about 2-3 years where AI took over search results and stopped ranking his site.
He used Google's AI to rewrite a bunch of his articles to make them more friendly towards what ranks nowadays and he went from being ghosted to being back on the top of the first page of results.
He told his story here https://www.youtube.com/watch?v=II2QF9JwtLc.
NOTE: I'd never seen him in my YouTube feed until the other day, but it resonated a lot with me because I've had a technical blog for 11 years and was able to sustain an online business for a decade, until the last 2 years or so. Traffic to my site nosedived, which took a very satisfying lifestyle business to almost $0. I haven't gone down the path of rewriting all of my posts with AI to remove my personality yet.
Search engines want you to remove your personal take on things and write in a very machine oriented / keyword stuffed way.
I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write," and I think that pretty much sums it up. Writing and programming are both a form of working at a problem through text, and when it goes well, other practitioners of the form can appreciate its shape and direction. With AI you can get a lot of 'function' on the page (so to speak), but it's inelegant and boring. I do think AI is great at letting you not write the dumb boilerplate we all could crank out if we needed to but don't want to. It just won't help you do the innovative thing, because it is not innovative itself.