
imiric · last Wednesday at 10:42 PM

I've come across that quote several times, and I've reached the same conclusion as you.

While I share Dijkstra's sentiment that "thinking machines" is largely a marketing term we've been chasing for decades, and this new cycle is no different, it's still worth discussing and... thinking about. The implications of a machine that can approximate or mimic human thinking are far beyond the implications of a machine that can approximate or mimic swimming. It's frankly disappointing that such a prominent computer scientist and philosopher would be so dismissive and uninterested in this fundamental CS topic.

Also, it's worth contextualizing that quote. It's from a panel discussion in 1983, which was between the two major AI "winters", and during the Expert Systems hype cycle. Dijkstra was clearly frustrated by the false advertising, to which I can certainly relate today, and yet he couldn't have predicted that a few decades later we would have computers that mimic human thinking much more closely and are thus far more capable than Expert Systems ever were. There are still numerous problems to resolve, w.r.t. reliability, brittleness, explainability, etc., but the capability itself has vastly improved. So while we can still criticize modern "AI" companies for false advertising and anthropomorphizing their products just like in the 1980s hype cycle, the technology has clearly improved, which arguably wouldn't have happened if we hadn't considered the question of whether machines can "think".


Replies

slfnflctd · yesterday at 3:46 PM

> The implications of a machine that can approximate or mimic human thinking are far beyond the implications of a machine that can approximate or mimic swimming

It seems to me like too many people are missing this point.

Modern philosophy tells us we can't even be certain whether other humans are conscious or not. The 'hard problem', p-zombies, etcetera.

The fact that current LLMs can convince many actual humans that they are conscious (whether they are or not is irrelevant, I lean toward not but whatever) has implications which aren't being discussed enough. If you teach a kid that they can treat this intelligent-seeming 'bot' like an object with no mind, is it not plausible that they might then go on to feel they can treat other kids, who obviously seem far less intelligent than the bot, like objects as well? Seriously, we need to be talking more about this.

One of the most important questions about AI agents in my opinion should be, "can they suffer?", and if you can't answer that with a definitive "absolutely not" then we are suddenly in uncharted waters, ethically speaking. They can certainly act like they're suffering (edit: which, when witnessed by a credulous human audience, could cause those humans to suffer!). I think we should be treading much more carefully than many of us are.
