
Lerc · yesterday at 6:37 PM

One of my formative impressions of AI came from the depiction of the Colligatarch in Alan Dean Foster's The I Inside.

The AI in the book almost feels like the main message masquerading as a subplot.

Asimov knew the risks, and I had assumed until fairly recently that the lessons and explorations he imparted in the Robot books had provided a level of cultural knowledge of what we were about to face. Perhaps the I, Robot movie was a warning of how much the signal had decayed.

I worry that we are sociologically unprepared, and sometimes it seems wilfully so.

People discussed this potential in great detail decades ago. Indeed, the Sagan reference at the start of this post points to one of the significant contributors to the conversation. But it seems that by the time it started happening, everyone had forgotten.

People are talking in terms of who to blame, what will be taken from me, and inevitability.

Any talk of a future we might want is dismissed as idealistic or hype. Any depiction of a utopian future is met with derision far too often. Even worse, the depiction can be warped into an evil caricature of "what they really meant".

How do we know what course to take if we can't talk about where we want to end up?


Replies

nemomarx · yesterday at 8:05 PM

I think people broadly feel like all of this is happening inevitably or being done by others. The alignment people struggle to get their version of AI to market first; the techies worry about being left behind. No one ends up being in a position to steer things or have any influence over the future in the race to keep up.

So what can you and I do? I know in my gut that imagining an ideal outcome won't change what actually happens, and neither will criticizing it really.

cheschire · today at 1:28 AM

My interpretation is that Asimov assumed humans would need to understand artificial intelligence at the deepest levels before it could be created. He built his robot concepts rooted in the mechanical world rather than the world of the integrated circuit.

He never imagined, I suppose, that we would have the computing power necessary to just YOLO-dump the sum of all human knowledge into a few math problems and get really smart sounding responses generated in return.

The risks can be generalized well enough. Man's hubris is his downfall, etc. etc.

But the specific issues we are dealing with have little to do with us feeling safe and protected behind some immutable rules that are built into the system.

majormajor · today at 3:10 AM

We've had many decades of technology since Asimov started writing about robots, and we've seen almost all of it used to make the day-to-day experience of the average worker-bee worse. More tracking. More work after hours. More demands to do more with less. Fewer other humans to help you with those things.

We aren't working 4-hour days because we no longer have to spend half the day waiting on things that were slower pre-internet. We're just supposed to deliver more, and oh, work more hours too, since now we've always got our work with us.

Any discussion of today's AI firms has to start from the position that these companies are controlled by people deeply rooted in, and invested in, those systems and in the negative application of that technology to "working for a living" to date.

How do we get from there to a utopia?

Der_Einzigetoday at 1:57 AM

As an AI researcher who regularly attends NeurIPS, ICLR, ICML, and AAAI (where I am shitposting from), I can tell you the median AI researcher does not read science fiction, cyberpunk, etc. Most of them haven't read a proper book in over a decade.

Don't expect anyone building these systems to know what Blade Runner is, or "I Have No Mouth, and I Must Scream", or any other great literature about the exact thing they are working on!

psunavy03 · yesterday at 8:15 PM

People can't even have a conversation about any kind of societal issue these days without pointing at the other political tribe and casting aspersions about "what they really meant" instead of engaging with what's actually being said.

Forgetting that if you really can hear a dogwhistle, you're also a dog.

welferkj · today at 8:39 AM

Where we want to end up? Normies are still talking about the upcoming AI bubble pop in terms of tech basically reverting to 2022. It's wishful thinking all the way down.