Hacker News

impulser_ today at 9:27 PM

I'm pretty sure he's talking about companies and people outsourcing their decision-making and thinking to AI, not about using AI itself.

I don't think using AI to write code is AI psychosis or bad at all, but if you just prompt the AI and believe what it tells you, then you have AI psychosis. You see this a lot with finance people and VCs on Twitter. They literally post screenshots of ChatGPT as their thinking and reasoning about a topic instead of doing even a little bit of thinking themselves.

These things are dog shit when it comes to ideas, thinking, or providing advice, because they are pattern matchers: they just give you the pattern they see. Most people notice this if they try to talk to one about an idea. It often just spits out the most generic dog shit.

They are, however, pretty useful for certain tasks where pattern matching is actually beneficial, like writing code. But again, you just can't let them do the thinking and decision-making.


Replies

mitchellh today at 9:38 PM

Correct. I use AI a ton and I'm having more fun every day than I ever did before thanks to it (on average, highs are higher, lows are lower). Your characterization is all very accurate. Thank you.

Here are some other pieces I've written on it:

- https://mitchellh.com/writing/my-ai-adoption-journey

- https://mitchellh.com/writing/building-block-economy

- https://mitchellh.com/writing/simdutf-no-libcxx (complex change thanks to AI, shows how I approach it rationally)

biophysboy today at 9:56 PM

The way I put this to myself is that AI gives “correct correct answers and incorrect correct answers”.

They almost always generate logically coherent text, but sometimes that text rests on implicit assumptions and decisions that aren't valid for the use case.

Generating a "correct correct" solution requires properly defining the problem, which is arguably harder than producing the solution itself.

jas- today at 10:03 PM

I digress, but this article has actually helped me identify useful knowledge gaps around topics I've researched. https://drensin.medium.com/elephants-goldfish-and-the-new-go...

You have to think about things objectively no matter what, but when I start researching topics like physics, using AI as that article suggests has proven very useful.

com2kid today at 10:00 PM

I wonder how different this is from companies letting Fortune or Inc magazine do their thinking for them.

Or random consultants.

Is "AI said it was a good idea" any worse than "we were following industry trends"?

kakugawa today at 9:33 PM

He uses AI himself, so I agree he doesn't see AI use as black and white.

Hard agree about ideas, thinking, and advice. AI's sycophancy is a huge, subtle problem. I've tried my best to write a system prompt that guards against it w/ Opus 4.7. It doesn't adhere to it 100% of the time, and the longer the conversation goes, the worse the sycophancy gets (because the system instructions carry less and less weight). I have to actively look for and guard against sycophancy whenever I chat w/ Opus 4.7.

lovich today at 9:55 PM

I didn't think merely offloading your thinking to AI counted as AI psychosis.

To me, AI psychosis is the handful of friends I've had who have done things like hold a full-on mourning session when a model updates because they lost a friend/lover; the one guy who won't speak to his family directly but has them talk to ChatGPT first and then has ChatGPT generate his response; or the two who are confident they have discovered that physics and mathematics are incorrect and have found the truth of reality through their conversations with the models.

But language is a shared technology, so maybe the term is being used for less egregious behavior than what I was using it for.

slopinthebag today at 9:30 PM

> companies and people outsourcing their decision making and thinking to AI

It's so interesting how easy it is to steer LLMs, via context, into arriving at whatever conclusion you engineer out of them. They really are like improv actors, and the first rule of improv is "yes, and".

So part of the psychosis is when these people unknowingly steer their LLM toward their own conclusions and biases, which then get magnified and solidified. It's gonna end in disaster.
