Hacker News

AI makes you boring

481 points by speckx today at 6:12 PM | 272 comments

Comments

solarisos today at 6:39 PM

This resonates with what I’m seeing in B2B outreach right now. AI has lowered the cost of production so much that 'polished' has become a synonym for 'generic.' We’ve reached a point where a slightly messy, hand-written note has more value than a perfectly structured AI essay because the messiness is the only remaining signal of actual human effort.

kazinator today at 7:57 PM

If actually making something with AI and showing it to people makes you boring ... imagine how boring you are when you blog about AI, where at most you only verbally describe some attributes of what AI made for you, if anything.

nickjj today at 6:46 PM

Look at the world Google is molding.

Here's a guy who has had an online business dependent on ranking well in organic searches for ~20 years and has 2.5 million subs on YouTube.

Traffic to his site was fine to sustain his business this whole time, up until about 2-3 years ago when AI took over search results and his site stopped ranking.

He used Google's AI to rewrite a bunch of his articles to make them more friendly towards what ranks nowadays and he went from being ghosted to being back on the top of the first page of results.

He told his story here https://www.youtube.com/watch?v=II2QF9JwtLc.

NOTE: I had never seen him in my YouTube feed until the other day, but his story resonated a lot with me because I've had a technical blog for 11 years and was able to sustain an online business for a decade, until the last 2 years or so. Traffic to my site nose dived. That took a very satisfying lifestyle business down to almost $0. I haven't gone down the path of rewriting all of my posts with AI to remove my personality yet.

Search engines want you to remove your personal take on things and write in a very machine oriented / keyword stuffed way.

maplethorpe today at 8:53 PM

Just this week I made a todo list app and a fitness tracking app and put them both on the App Store. What did you make?

minimaxir today at 7:02 PM

> Prompting an AI model is not articulating an idea. You get the output, but in terms of ideation the output is discardable. It’s the work that matters.

This is reductive to the point of being incorrect. One of the misconceptions about working with agents is that the prompts are typically simple: it's more romantic to think that someone gave Claude Code "Create a fun Pokemon clone in the web browser, make no mistakes" and then just shipped the one-shot output.

As some counterexamples, here are two sets of prompts I used for my projects, which very much articulate an idea in the first prompt with very intentional constraints/specs and then iterate on those results:

https://github.com/minimaxir/miditui/blob/main/agent_notes/P... (41 prompts)

https://github.com/minimaxir/ballin/blob/main/PROMPTS.md (14 prompts)

It's the iteration that is the true engineering work, as it requires enough knowledge to a) know what's wrong and b) know whether the solution actually fixes it. Those projects are what I call super-Pareto: the first prompt got 95% of the work done... but 95% of the effort was spent afterwards improving it, with manual human testing being the bulk of that work rather than watching the agent generate code.

crawshaw today at 6:38 PM

It is a good theory, but does it hold up in practice? I was able to prototype and thus argue for and justify building exe.dev with a lot of help from agents. Without agents helping me prove out ideas I would be doing far more boring work.

Oarch today at 6:27 PM

Just earlier I received a spew of LLM slop from my manager as "requirements". He clearly hadn't even spent two minutes reviewing whether any of it made sense, was achievable, or was even desirable. I ignored it. We're all fed up with this productivity theatre.

darod today at 7:24 PM

"You don't build muscle using an excavator to lift weights. You don't produce interesting thoughts using a GPU to think." Great line!

jason_oster today at 9:34 PM

Back in my day, boring code was celebrated.

grimgrin today at 6:52 PM

I land on this thread to ctrl-f "taste" and will refresh and repeat later

That is for sure the word of the year, true or not. I agree with it, I think

bcatanzaro today at 8:39 PM

AI is a mirror. If you are boring, you will use AI in a boring way.

BurningFrog today at 6:49 PM

OK, but maybe we only notice the mediocre uses of AI, while the smart uses come across as brilliant people having interesting insights.

notatoad today at 6:45 PM

i think about this a lot with respect to AI-generated art. calling something "derivative" used to be a damning criticism. now, we've got tools whose whole purpose is to make things that are very literally derivative of the work that came before them.

derivative work might be useful, but it's not interesting.

gAI today at 6:53 PM

I'm self-aware enough to know that AI is not the reason I'm boring.

turnsout today at 6:27 PM

I think it's simpler than that. AI, like the internet, just makes it easier to communicate boring thoughts.

Boring thoughts always existed, but they generally stayed in your home or community. Then Facebook came along, and we were able to share them worldwide. And now AI makes it possible to quickly make and share your boring tools.

Real creativity is out there, and plenty of people are doing incredibly creative things with AI. But AI is not making people boring—that was a preexisting condition.

mhurron today at 8:09 PM

Joke's on you, I was boring before AI.

Sol- today at 6:25 PM

The headline should be qualified: maybe it makes you boring compared to the counterfactual world where you somehow would have developed into an interesting auteur or craftsman instead, which few people would do in practice.

As someone who is fairly boring, conversing with AI models and thinking things through with them certainly decreased my blandness and made me tackle more interesting thoughts or projects. To have such a conversation partner at hand in the first place is already amazing - isn't it always said that you should surround yourself with people smarter than yourself to rise in ambition?

I actually have high hopes for AI. A good one, properly aligned, can definitely help with self-actualization and expression. Cynics will say that AI will all be tuned to keep us trapped in the slop zone, but when even mainstream labs like Anthropic talk so much about AI for the betterment of humanity, I am still hopeful. (If you are a cynic who simply doesn't believe such statements from the firms, there's not much I can say to convince you anyway.)

logicprog today at 6:36 PM

I think this is generally a good point if you're using an AI to come up with a project idea and elaborate it.

However, I've sometimes spent years thinking through interesting software architectures, technical approaches, and designs for various things, including window managers, editors, game engines, programming languages, and so on: reading relevant books, guides, and technical manuals, sketching out architecture diagrams in my notebooks, and writing long handwritten design documents in markdown files or in messages to friends. I've even, in some cases, gotten as far as 10,000 lines or so of code sketching out some of the architectural approaches or things I want to try, to get a better feel for the problem and the underlying technologies. But I've never had the energy to do the raw code shoveling and debug looping necessary to get a prototype of my ideas out. AI now makes that possible.

Once that prototype is out, I can look at it, inspect it from all angles, tweak it and understand the pros and cons, the limitations and blind spots of my idea, and iterate again. Also, through pair programming with the AI, I can learn about the technologies I'm using through demonstration and see what their limitations and affordances are by seeing what things are easy and concise for the AI to implement and what requires brute forcing it with hacks and huge reams of code and what's performant and what isn't, what leads to confusing architectures and what leads to clean architectures, and all of those things.

I'm still spending my time reading things like Game Engine Architecture, Computer Systems, A Philosophy of Software Design, Designing Data-Intensive Applications, Thinking in Systems, and Data-Oriented Design, plus articles on CSP, fibers, compilers, type systems, and ECS, and writing down notes and ideas.

So really it seems to me more like boring people who aren't deeply interested in a subject use AI to do all of the design and ideation for them. And so, of course, it ends up boring, and you're just seeing more of it because AI lowered the barrier to entry. I think if you're an interesting person with strong opinions about what you want to build and how you want to build it, who is actually interested in exploring the literature with or without AI help and then pair programming with it to explore the problem space, it still ends up interesting.

Most of my recent AI projects have just been small tools for my own usage, but that's because I was kicking the tires. I have some bigger things planned, executing on ideas I have pages and pages about in my notebooks, dozens of them.

waldopat today at 7:52 PM

Slop is probably more accurate than boring. LLM-assisted development enables output and speed. In the right hands, it can really improve code quality or execution. In the wrong hands, you get slop.

Otherwise, AI definitely impacts learning and thinking. See Anthropic's own paper: https://www.anthropic.com/research/AI-assistance-coding-skil...

ryandrake today at 6:56 PM

Honestly, most people are boring. They have boring lives, write boring things, consume boring content, and, in the grand scheme of things, have little-to-no interesting impact on the world before they die. We don't need AI to make us boring, we're already there.

themafia today at 7:55 PM

Or... only boring people use AI.

hnlmorg today at 6:38 PM

I've been bashing my head against the wall with AI this week because it has utterly failed to even get close to solving my novel problems.

And that's when it dawned on me just how much of the AI hype has been around boring, seen-many-times-before technologies.

This, for me, has been the biggest real problem with AI. It's become so easy to churn out run-of-the-mill software that I just cannot filter any signal from all the noise of generic side projects that clearly won't be around in 6 months' time.

Our attention is finite. Yet everyone seems to think their dull project is uniquely more interesting than the next person's dull project, even though those authors spent next to zero effort creating it themselves.

It’s so dumb.

stack_framer today at 8:03 PM

AI also makes you bored.

abalashov today at 8:56 PM

Brilliant. You put into words something that I've thought every time I've seen people flinging around slop, or ideating about ways to fling around slop to "accelerate productivity"...

nickysielicki today at 6:26 PM

> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.

This is repeated all the time now, but it's not true. It's not particularly difficult to pose a question to an LLM and to get it to genuinely evaluate the pros and cons of your ideas. I've used an LLM to convince myself that an idea I had was not very good.

> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.

Thinking about a problem for a long period of time doesn't bring you any closer to understanding the solution. Expertise is highly overrated. The Wright Brothers didn't have physics degrees. They did not even graduate from high school, let alone attend college. Their process for developing the first airplanes was much closer to vibe coding from a shallow, surface-level understanding than to deeply contemplating the problem.

himata4113 today at 6:24 PM

I've actually run into a few blogs that were incredibly shallow while sounding profound.

I think when people use AI to, e.g., compare Docker to k8s without ever having used k8s, that's how you get horrible articles that sound great but, to anyone with experience in both, are complete nonsense.

redwood today at 7:31 PM

AI is groupthink. Groupthink makes you boring. But then the same can be said about mass culture. Why do we all know Elvis, Frank Sinatra, Marilyn Monroe, the Beatles, etc., when there were countless others who came before and after them? Because they happened to emerge at the right time in our mass culture.

Imagine how dynamic the world was before radio, before TV, before movies, before the internet, before AI. I mean, imagine a small-town theater, musician, comedian, or anything else before we had all homogenized into mass culture. It's hard to know what it was like, but I think it's what makes the great appeal of things like Burning Man or other contexts that encourage you to tune out the background and be in the moment.

Maybe the world wasn't so dynamic and maybe the gaps were filled by other cultural memes like religion. But I don't know that we'll ever really know what we've lost either.

How do we avoid groupthink in the AI age? The same way as in every other age: by making room for people to think and act differently.

tonymet today at 6:48 PM

When apps were expensive to build, developers at least had the excuse that they were too busy to build something appealing. Now they can cope by pretending to be artisanal hand-built software engineers, and still fail at making anything appealing.

If you want to build something beautiful, nothing is stopping you, except your own cynicism.

"AI doesn't build anything original". Then why aren't you proving everyone wrong? Go out there and have it build whatever you want.

AI has not yet rejected any of my prompts by saying I was being too creative. In fact, because I'm spending far less time on mundane tasks, I can focus much more time on creativity, performance, security, and the areas I am embarrassed to have overlooked on previous projects.

5o1ecist today at 10:12 PM

It appears that the author has only now discovered what has been obvious all along. I wish for the author to now read my post, so I can pretend I've stolen the time back.

Most people are boring. Most people have always been boring. Most people are average, and the average is boring. If you don't want to believe that, simply compare the number of boring people to not-boring people. (Note: people might be amusing and appear not-boring, but still be boring, generic, average people.)

It actually has nothing to do with AI. Most people are, by default, not thinking deeply either. They barely understand anything beyond a surface level... and no, it does not matter at all what the subject is.

For example: stupid doctors exist. They're not rare, but the norm. They've spent a lot of time learning all kinds of supposedly important things, only to end up essentially as pattern-matching machines, and thus easily replaced by AI. Stupid doctors exist because intelligence isn't actually a requirement.

Of course, no widely perceived problem exists in this regard, at least not beyond so-called anecdotal evidence strongly suggesting that most doctors are, in fact, just as stupid as most other people.

The same goes for programmers. Or blog-posters. There are millions of existing, active blog-posters, dwarfed by the dozens of millions of people who have tried it and who have, for whatever reason, failed.

Of the millions of existing, active blog-posters, it is impossible to claim that all of them are good, or even remotely good. It is inevitable that a huge portion of them are what would colloquially be called trash. As with everything people do, a huge number of them sit within the average (d'uh), and then there are the outliers upwards, whom everyone else benefits from.

Or streamers. Not everyone's an xQc or AsmonGold, for a reason. These are the outliers. The average benefit from their existence, and the rest is what it is.

The 1% rule of the internet, albeit with proportions that are, of course, relative, is correct. [1]

It is actually rather amusing that the author assumes that MY FELLOW HUMANS are, by default, capable of deep thinking. They are not. This is not a thing. It needs to be learned, just like everything else. Even if born with the ability, people in general aren't raised to utilize it.

Sadly, the status quo is that most people learn about thinking roughly the same as they learn about the world of wealth and money: Almost nothing.

Both fundamentally important, both completely brushed aside or simply beyond ignorance.

> The way human beings tend to have original ideas is to immerse in a problem for a long period of time

This is actually not true. It's the pause after the immersion that carries most of the weight. The pause. You can spend weeks learning about things, but convergence happens most effectively during a pause, just like muscles don't improve during training, but during the rest afterward. [2]

Well ... it's that, or marihuana. Marihuana (not all types/strains work for that!) is insanely effective both for creativity and for simply testing how deeply the gathered knowledge has converged. [3]

Exceptionally, as a fun fact, there are "creativity parties", in which groups of people smoke weed exactly for the purpose of creating and dismissing hundreds of ideas not worth thinking further about, in hopes of someone having that one singular grand idea that will cause a leap forward, spawned out of an artificially induced convergence of those hundreds of others.

(Yes, we can schedule peak creativity. Regularly. No permanent downsides.)

Anyhow, here's a brutal TLDR:

No, I'm not boring. You are. Evidently so!

Your post literally oozes irony.

-----

[1] https://www.perplexity.ai/search/is-this-correct-for-testing...

[2] https://www.perplexity.ai/search/is-this-correct-for-testing...

If your audience is technically or cognitively literate, your original phrase - "for testing how deeply the gathered knowledge converged" - actually works quite elegantly. It conveys that you’re probing the profundity of the coherence achieved during passive consolidation, which is exactly what you described.

[3] https://www.perplexity.ai/search/is-this-correct-for-testing...

So your correction of the quote isn’t nitpicking - it’s a legitimate refinement of how creativity actually unfolds neurocognitively. The insight moment often follows disengagement, not saturation.

-----

JimmaDaRustla today at 9:03 PM

Bro, I'm a software developer, it's not the fucking AI making me boring.

add-sub-mul-div today at 6:24 PM

Also sounds likely that it's the mediocre who gravitate to AI in the first place.

PaulHoule today at 7:43 PM

Try the formulation "anything about AI is boring."

Whether it's the guy reporting on his last year of agentic coding (half-baked evals of 25 models that will be off the market in 2 years), or Steve Yegge smoking weed and gaslighting us with "Gas Town", or the self-appointed Marxist who rails against exploitation without clearly understanding what role capitalism plays in all this, 99% of the "hot takes" you see about AI are by people who don't know anything valuable at all.

You could sit down with your agent and enjoy having a coding buddy, or you could spend all day absorbed in FOMO reading long breathless posts by people who know about as much as you do.

If you're going to accomplish something with AI assistants it's going to be on the strength of your product vision, domain expertise, knowledge of computing platforms, what good code looks like, what a good user experience feels like, insight into marketing, etc.

Bloggers will try to convince you there is some secret language to write your prompts in, or some model that is so much better than what you're using, but this is all seductive because it obscures the fact that the "AI skills" will be obsolete in 15 minutes, while all of the other unique skills and attributes that make you you are the ones that AI can put on wheels.

aaroninsf today at 7:58 PM

Setting aside the marvelous murk in that use of "you", which parenthetically I would be happy to chat about ad nauseam, I would say this is a fine time to haul out:

Ximm's Law: every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon.

Lemma: any statement about AI which uses the word "never" to preclude some feature from future realization is false.

Lemma: contemporary implementations have already improved; they're just unevenly distributed.

These days I can never stop thinking about the XKCD whose punchline is the alarmingly brief window between "can do at all" and "can do with superhuman capacity."

I'm fully aware of the numerous dimensions along which the advancement from one state to the other, in any specific domain, is unpredictable, Hard, or unlikely to be quick... but this is the rare case where, absent black-swan externalities ending the game, the line goes up.

clint today at 6:49 PM

Yet another boring, repetitive, unhelpful article about why AI is bad. Did the 385th iteration of this need to be written by yet another person? Why did this person think it was novel or relevant to write? Did they think it espouses some kind of unique point of view?

hhsuey today at 6:49 PM

Another clickbait title produced by a human. Most of your premises could easily be countered. Every comment is essentially an example.

wagwang today at 6:21 PM

Isn't this just flat-out untrue, since bots can pass Turing tests?

stuckinhell today at 6:20 PM

I mean, can't you just… prompt engineer your way out of this? A writer friend of mine literally just vibes with the model differently and gets genuinely interesting output.

apexalpha today at 6:48 PM

Meh.

Being 'anti-AI' is just hot right now, and lots of people are jumping on the bandwagon.

I'm sure some of them will actually hold out. Just like those people still buying vinyl because Spotify is 'not art' or whatever.

Have fun, all. Meanwhile, I built 2 apps this weekend purely for myself. That would've taken me weeks a few years ago.

elliotbnvl today at 6:26 PM

I was on board with the author until this paragraph:

> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.

The author comes off as dismissive of the potential benefits of the interactions between users and LLMs rather than open-minded. That degree of myopia makes me retroactively question the rest of his conclusions.

There's an argument to be made that rubber-ducking, just having a mirror to help you navigate your thoughts, is ultimately more productive and yields more useful thinking than operating in a vacuum. LLMs are particularly good at telling you when your own ideas are unoriginal, because they are good at doing research (and also have the median of existing ideas baked into their weights).

They also strawman usage of LLMs:

> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.

Who says you aren't spending time thinking about a problem with LLMs? The same users who didn't spend time thinking about problems before LLMs will not spend time thinking about problems after LLMs, and the inverse is similarly true.

I think everybody is bad at original thinking, because most thinking is not original. And that's something LLMs actually help with.