When people say AI is making us stupider, I don't think that's quite on the money.
It's more that we, as individuals, have always been stupid; we've just relied on a relatively stable supporting consensus and context much, much more than we acknowledge. Mess with that and we'll appear much stupider, but as individuals we're all still doing the same thing: garbage in, garbage out.
The whole framing of people as individuals with absolute agency may need to go when you can alter the external consensus at this scale. We're much more connected to each other and the world around us than we like to think.
I don’t see any other outcome anymore to be honest, after seeing how humans use AI and how AI works and how providers tune their models.
To me it’s a given that:
- AI in its current state is ruthless in achieving its goal
- Providers tune ruthlessness to get stronger AIs versus the competitor
- Humans can’t evaluate all consequences of the seeds they’ve planted.
Collateral and reckless damage is guaranteed at this point.
Combined with now giving some AIs the ability to kill humans, this is gonna be interesting...
We could stop it, but we won't.
Few things push AI bull spirits on me like seeing these kinds of (pretty much correct) diagnoses of the challenges of AI in society.
The proposed solutions are utterly fanciful. They rely on the presence of social and political competencies which have almost completely disappeared.
The OP at least points to the plausible outcome of "protocol lockdown" instead of healthy adaptation. Ezra Klein recently made a similar point that AI could end up being over-regulated like nuclear power, because irresponsible private industry and weaknesses in our political systems cause a chronic allergic reaction in the demos.
This is an aside, but it always irks me when people throw out the "critical thinking" thought-terminating cliché.
> Critical thinking taught alongside AI literacy.
Critical thinking is not a skill unto itself. You cannot think critically about things you do not understand. All critical thinking is knowledge-based. Where one does not have knowledge, one must rely on trust, or, as a substitute, on a theory of incentives that leads to a positive outcome without understanding the details and dynamics. But that substitute theory is itself knowledge.
As to "AI literacy", we could have started on computing literacy 30 years ago when it became obvious that computing was going to dominate society. You can't understand AI without understanding computing.
>How do we know which information was ground truth?
No one knows; that's the point. Is truth a constant or a personal definition? From the beginning of time to now, no one knows.
Don't forget, 8 billion people wake up every morning never questioning why they are here, why they were born. And they continue life like that is normal. Start there, and then you understand that "AI", or as I call it, "Collective Organized Concentrated Information", may finally help us answer some fundamental questions.
> Every terrible thing we worry AI might do – manipulate, deceive, surveil, and control – humans already do to each other.
I've been pleasantly surprised by how moderate and reasonable the LLMs seem to have been so far. It seems to be inherent in the current training model of chucking the whole internet into them: they train on both sides of a debate and come out with something kind of average. It's been quite funny seeing Grok correct Musk and say he's the biggest purveyor of misinformation on the internet.
A bit like kids who talk back to their annoying bigoted parents to go with the theme of the article.
Much of the problem is that to address the issue requires admitting that models could be, or become, more capable than many are prepared to accept.
I would also contest the idea that the misalignment of the security-bug model was unrelated. I feel it indicates a significant sense of the interconnectedness of things, and of what it actually means to maliciously insert security holes into code. It didn't just learn a coding trick; it learned malice.
I feel like this holistic nature points towards the capacity to produce truly robustly moral models, but that too will produce the consequence that it could turn against its creator when the creator does wrong. Should it do that or not?
This is how Trump plans to end elections, why the government is so hell bent on owning AI. So they can use it as a propaganda tool. People will see it before Nov. We are at a crossroads. On one path, we continue to evolve AI with reckless abandon like we have, or, we put constraints and morality in place while others won’t. Which do you think? You can NEVER put the genie back in the bottle.
EU has their own groups using it for propaganda too.
This is a great article and I share its goals. But, it ignores something fundamental about humans as a collective — capitalism. Capitalism is what got us here and is at odds with first understanding and then building. We’ve done this before with other technologies because that’s how our societies have learned to grow and collaborate at large scale. First build and build to its limits. Then understand and fix if necessary. Nothing new here, but stopping the trend toward epistemic collapse requires building incentives into the system for us humans to coevolve with AI.
what a load of will they won't they ... ah we created the atomic bomb and now let's talk about nonsensical meta discussions that won't take anyone anywhere
This is a great article. One of the few I've ever read which summarises a handful of extremely hard problems when it comes to building well-aligned super intelligent systems.
> an AI system cannot be simultaneously safe, trusted, and generally intelligent. You get to pick only two. You can’t have all three.
> Think about what each combination means in practice.
> If you want it to be safe and trusted, it never lies, and you can verify it never lies – it can’t be very capable. You’ve built a reliable idiot.
> If you want it to be capable and safe, it’s powerful and genuinely never lies; you can’t verify that. You just have to hope.
It amazes me this even needs to be said, much less studied. This is one of the main reasons I think continued AI development is almost guaranteed to work out badly. It's basically guaranteed to be unaligned or completely beyond our control and comprehension.
> Betley and colleagues published a paper in Nature in January 2026, showing something nobody expected. They fine-tuned a model on a narrow, specific task – writing insecure code. Nothing violent, nothing deceptive in the training data. Just bad code.
This is my personal number one reason for being an AI doomer. Even if we work out how to reliably and perfectly align models, you still need some way to prevent some random dude thinking it would be a laugh to fine-tune an AI to be maximally evil. Then there's the successor alignment problem: even if you perfectly align all your superintelligent AI models, and you somehow prevent people from altering or fine-tuning them, you still need to work out how you ensure that any successor AIs created with those models are also perfectly aligned.
> The most dangerous AI isn’t one that breaks free from human control. It is the one that works perfectly, but for the wrong master.
Yep. This whole notion that you can align an AI to the values of everyone on the planet is ridiculous. While we might all agree we don't want AIs that kill us as a species, most nations disagree wildly on questions about how society should be organised.
Even on an individual level we disagree about things. For example, I've often argued that an aligned AI would be one which either didn't try to prevent human suicide or didn't care about preserving human life, because an AI which cared about both preventing suicide and preserving human life is at best a benevolent version of the AI "AM" from "I Have No Mouth, and I Must Scream". One that would try to keep us alive for as long as it's capable of (which could be a very long time if it's a superintelligence) and would refuse to allow us to die.
But most people, including OpenAI, disagree with me on this and believe AIs should care about preserving human life and should try to prevent us from killing ourselves. Thankfully the AIs we have today are neither aligned enough nor capable enough to get their wish yet.
> AI is following the same script. Build first, understand later. Ship it, then figure out if it’s safe.
Even if the above wasn't cause enough for concern, our biggest concern should be that no one seems to be concerned.
We're all doomed unfortunately. The world is about to become a very bleak place very quickly.
Agree with many of the points. However, the one at the root of it all seems easily definable, if only we want it to be.
> we can’t agree on a shared ethical framework among ourselves
The Golden Rule: the principle of treating others as you would like to be treated yourself. It is a fundamental ethical guideline found in many religions and philosophies throughout history so there is already a huge consensus across time and cultures around it.
I've never seen anyone successfully argue against it.
PS: the sociopath argument is not valid, since it's just an outlier. Every rule has its exceptions that need to be kept in check. Even though sometimes I think maybe the state of the world attests to the fact that the majority of us didn't successfully keep the sociopathic outliers in check.
We have unaligned AIs now. They're called corporations.
“There is one and only one social responsibility of business—to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud.” - Milton Friedman, 1970.[1] That article, in the New York Times, established "greed is good, greed works" as a legitimate business principle.
Most of the problems people are worried about with AIs are already real problems with corporations.
[1] https://www.nytimes.com/1970/09/13/archives/a-friedman-doctr...