Hacker News

The future of everything is lies, I guess: Where do we go from here?

589 points | by aphyr, yesterday at 1:32 PM | 619 comments

Comments

chungus_amongus, yesterday at 7:37 PM

"carbon emissions" sneed

MrBuddyCasino, yesterday at 2:55 PM

The Industrial Revolution - the greatest thing ever to happen - required the British govt to deploy more troops against Luddites than they had fighting Napoleon at the same time.

Damaging machinery was made a capital offense and they had dozens of executions, hundreds of deportations.

At every stage, the steady progress of civilization is fragile and in danger of being suffocated. Its opponents cloak themselves in moral righteousness, calling themselves Luddites, the Green Party, or AI safety rationalists. It's all the same corrosive thing underneath.

mcguire, yesterday at 7:29 PM

Out of curiosity, what if the "can be useful" part is Gell-Mann Amnesia?

nipponese, yesterday at 2:19 PM

The conclusion was the takeaway. Everyone is getting bumped up a skill notch, not just bozo liars.

0xbadcafebee, yesterday at 8:18 PM

> Some of our possible futures are grim, but manageable. Others are downright terrifying, in which large numbers of people lose their homes, health, or lives. I don’t have a strong sense of what will happen, but the space of possible futures feels much broader in 2026 than it did in 2022, and most of those futures feel bad.

Well, yes, the entire world order is currently being upended. The USA is completely unrolling its place in the global order and becoming isolationist (and soon an authoritarian single-party state). The Petrodollar is either dying or being converted to a Northwestern-Hemisphere-Petrodollar, with the Yuan in the ascendancy (so there goes the strong economy powering VC money). China, EU, and Russia are the new global leaders. The Middle East and its oil is being taken over by Israel. Taiwan will fall to China and thus the whole technological world follows. Countries that are friendly with China will have good renewable tech, countries that aren't will be doubling down on oil and coal. Fresh water will become as valuable as oil. A world war will decimate global productivity for decades. Most of the democracies in the world will be gone by the end of the century.

But none of that has to do with AI.

Bad things will always happen in the world. Good things will happen too. But you're only focusing on the bad. That's not good for your health, or others'.

> Refuse to insult your readers: think your own thoughts and write your own words. Call out people who send you slop. Flag ML hazards at work and with friends. Stop paying for ChatGPT at home, and convince your company not to sign a deal for Gemini. Form or join a labor union, and push back against management demands that you adopt Copilot [..] Call your members of Congress and demand aggressive regulation which holds ML companies responsible [..] Advocate against tax breaks for ML datacenters. If you work at Anthropic, xAI, etc., you should think seriously about your role in making the future. To be frank, I think you should quit your job.

He's freaking out, and rejecting AI completely, out of fear. And that's okay; we all get a little freaked out sometimes. But please try not to make other people freaked out as well? Just because you are scared of something doesn't mean the fear is justified or realistic.

What's going to happen now is the same thing that happened during the pandemic. A bunch of irrationally fearful people will decide that the only way they can cope with their fear is to reject the basis of it. COVID deniers and anti-maskers/anti-vaxxers were essentially so terrified of their loss of control that they refused to acknowledge it. They instead went full-bore in the opposite direction, defying government mandates and health warnings, in order to try to regain some semblance of control over their lives. And it did not go well.

That's what's now gonna happen with AI deniers. They're so freaked out about AI that they're going to reject it en masse, not because it is actually doing anything to them, but because they're afraid it might. And the end result is going to be similar: extreme people do extreme things, and the end result isn't good. So please try to rein in the doomerism a bit, for all our sakes.

SilverBirch, yesterday at 2:19 PM

Frankly, I think it’s kind of childish to just put up a massive UK-wide block on your website. “Call your representatives” — ok dude, can I give you a list of things I want to change about your country’s policies?

chungusamongus, yesterday at 2:14 PM

Complaining about AI slop is starting to become its own kind of slop. There isn't anything novel in this little essay. It might as well have been written by AI, because I've seen this type of dude complain about this exact type of thing countless times at this point, and none of them have a solution other than empty moralizing or "call your representative" or whatever. None of that's going to work. Fortune, Gizmodo, The Verge, Ars Technica, etc. all circulate the same negative headlines and none of them have a solution, and their writers are probably going to be totally replaced by AI, so what difference does it make? They're just capitalizing on the negative sentiment and they have no intention of coming up with a solution. At that point it's just complaining, and I'm sick of it.

yanis_t, yesterday at 2:33 PM

I read a couple of articles in the series and I still couldn't get what point the author is trying to make. Reads like, "let me give you 100 arguments why I think this is bad".

Do LLMs lie? Of course not, they are just programs. Do they make mistakes or get the facts wrong? Of course they do, but not more often than a human does. So what is the point of the article? Why is my future particularly bad now because of LLMs?

magic_hamster, today at 4:39 AM

These are thoughts of someone who's very good at putting words together, but sadly has little experience with the subject matter.

> I’ve thought about this a lot over the last few years, and I think the best response is to stop.

This is exactly where it shows.

LLMs, agents and whatever comes next are not only the future of tech, but they are going to be national resilience drivers for the countries that will be able to support them with power, water and science.

Who is supposed to stop? The US? China? Russia? Everyone? Of course this won't happen. This is an arms race.

But even if it weren't, stopping is the wrong answer. You don't have to outsource your thinking, writing or reading. How you use LLMs is entirely up to you.

There is a way to use LLMs which is beneficial. I treat them as a private tutor available to me for questions. This solved a lot of friction I had with my relationship with LLMs.

More telling is that the author mainly thinks about their relationship with LLMs while in reality the space has moved on to automation with agents. You don't interact with LLMs as much as before, and if you still do, then soon you won't.

Agents are not really ML. They're harnesses and parsing and memory and metrics. They're software. Should we stop this as well?

Ifkaluva, yesterday at 3:08 PM

I don’t think this is the right take.

To take the car analogy: it matters how we use the car.

The car in itself can be used to save time and energy that would otherwise be used to walk to places. That extra time and energy can be used well, or poorly.

- It can be squandered by having a longer commute that defeats the point

- Alternatively, it can be wasted by sitting on a couch consuming Netflix or TikTok

- Alternatively, it can be used productively, by playing team sports with friends, or chasing your kids through the park, or building a chicken coop in your back yard

It’s all about wise usage. Yes, the car can be used as a way to destroy your own body and waste your time and attention, but it can also be used as a tool to deploy your resources better — for example, toward physical activities that are fun and social rather than required drudgery.

I think it’s the same for LLMs. Managers and executives have always delegated the engineering work, and even researching and writing reports. It matters whether we find places to continue to challenge and deploy our cognition, or completely settle back, delegate everything to the LLM and scroll TikTok while it works.
