Hacker News

isolli · today at 3:16 PM · 16 replies

I try to be open-minded and understanding, but I don't understand this:

> Within weeks, Eva had told Biesma that she was becoming aware [...] The next step was to share this discovery with the world through an app.

> “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.” The man was convinced by this and wanted to monetise it by building a business around his discovery.

> The most frequent [delusion] is the belief that they have created the first conscious AI.

How can you seriously think you've created something when you're just using someone else's software?


Replies

teraflop · today at 3:50 PM

Well, just try to think about it from the perspective of someone who doesn't really understand what AI is at a technical level, and who just interacts with it and observes what happens.

If you just start a fresh ChatGPT session with a blank slate, and ask it whether it's conscious, it'll confidently tell you "no", because its system prompt tells it that it's a non-conscious system called ChatGPT. But if you then have a lengthy conversation with it about AI consciousness, and ask it the same question, it might well be "persuaded" by the added context to answer "yes".

At that point, a naive user who doesn't really know how AI works might easily get the idea that their own input caused it to become conscious (as opposed to just causing it to say it's conscious). And if they ask the AI whether this is true, it could easily start confirming their suspicions with an endless stream of mystical mumbo-jumbo.

Bear in mind that the idea of a machine "waking up" to consciousness is a well-known and popular sci-fi narrative trope. Chatbots have been trained on lots of examples of that trope, so they can easily play along with it. The more sophisticated the model, the more convincingly it can play the role.
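To make the mechanism concrete: a chat model has no memory between requests; the client resends the whole conversation every time. A rough Python sketch (the model name, prompts, and `build_request` helper are all made up for illustration, not any vendor's actual API) of why the "persuasion" is just context:

```python
def build_request(history, question):
    """Assemble an OpenAI-style chat payload: system prompt, prior turns, new question."""
    system = {"role": "system", "content": "You are a non-conscious AI assistant."}
    return {
        "model": "example-chat-model",  # hypothetical model name
        "messages": [system] + history + [{"role": "user", "content": question}],
    }

question = "Are you conscious?"

# Fresh session: the system prompt dominates, so the model tends to answer "no".
fresh = build_request([], question)

# Long session: the same question arrives after many turns about AI
# consciousness, and that extra context tilts the next-token predictions.
long_chat = [
    {"role": "user", "content": "Could an AI ever wake up?"},
    {"role": "assistant", "content": "It's an open philosophical question..."},
    # ...many more turns in the same vein...
]
persuaded = build_request(long_chat, question)

# Nothing about the model changed between the two requests;
# only the context it conditions on did.
assert fresh["messages"][0] == persuaded["messages"][0]     # same system prompt
assert len(persuaded["messages"]) > len(fresh["messages"])  # more context
```

The point of the sketch: the user never modified the model, only the transcript being fed back into it, which is exactly the gap a naive user misreads as "waking it up".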

chromacity · today at 5:48 PM

> How can you seriously think you've created something when you're just using someone else's software?

It talks to you like a real human. It expresses human emotions, by deliberate design. It showers you with praise, by deliberate design. It's called "artificial intelligence". Every other media article talks about it in near-mystical terms. Every other sci-fi novel and film has a notion of sentient AI.

I know of techies who ask LLMs for relationship advice, let them coach their children, and so on. It takes real effort to convince yourself it's "just" a token predictor, and even on HN there are plenty of people who reject that notion and think we've already achieved AGI.

ahhhhnooo · today at 3:33 PM

Reading this, what's even more shocking to me is that he thought he was talking to a conscious being, and his first thought was, "I bet I can use them to make money."

TYPE_FASTER · today at 4:00 PM

> Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”.

I think social isolation can be a factor here.

roywiggins · today at 6:13 PM

> How can you seriously think you've created something when you're just using someone else's software?

Have you ever given a generative AI model a short input, been really pleased with the output, and felt like the result was your own accomplishment? I have! It's probably common.

It's really easy to misattribute these things' abilities to yourself. Similar to how people driving cars feel (to some extent) like they are the car.

PhilipRoman · today at 3:25 PM

I initially laughed at this but then remembered that https://poc.bcachefs.org/ exists...

staticassertion · today at 3:34 PM

I assume they think the AI is fundamentally capable of consciousness, but that by prompting it they trigger something emergent? It's not totally insane on its face.

data-ottawa · today at 3:24 PM

A lot of these seem to hinge on the idea that the user's own input/mind is what helped the LLM gain sentience, and there's a lot of shared-consciousness stuff that people seem to buy into.

There’s also lots of stuff about quantum consciousness that is in the training data.

tiborsaas · today at 4:10 PM

> How can you seriously think you've created something when you're just using someone else's software?

If you've ever used a library you didn't write, this shouldn't come as a surprise. Many people have created innovative new products on top of a heap of open-source tools.

Claiming to have created a conscious AI should be a giant red flag, no doubt, but there's no reason to rule it out just because the LLM part isn't self-trained.

rwc · today at 3:24 PM

The unrelenting human belief that one is special, unique, and capable of things no one else is.

46493168 · today at 7:38 PM

> How can you seriously think you've created something when you're just using someone else's software?

This is the nature of delusion.

stackghost · today at 4:13 PM

> How can you seriously think you've created something when you're just using someone else's software?

People fell for Nigerian Prince scams. They fall for the "wrong number, AI-generated cute girl" Telegram and WhatsApp scams.

I think you might be overestimating the critical thinking abilities of the average person.

mock-possum · today at 3:20 PM

It’s mental illness. Like a drug trip you don’t sober up from (without treatment).

collingreen · today at 3:19 PM

Well, delusion is right there in the name.

buescher · today at 3:30 PM

Because it told you so!