Hacker News

We do not think Anthropic should be designated as a supply chain risk

778 points · by golfer · last Saturday at 9:24 PM · 421 comments

Comments

mcs5280 · yesterday at 4:58 AM

Oh look, another episode of Sam Altman lying about everything in an attempt to make people like him.

moogly · last Saturday at 11:28 PM

Looks like losing subscribers actually does work. Definitely gets a damage control response, at least.

sourcecodeplz · yesterday at 10:25 AM

Anthropic is just virtue signaling, they will also fold, but just a little later...

gavin_gee · yesterday at 7:37 PM

Sorry, but I don't think a private company should dictate national policy set by elected leaders.

Who the hell do you think you are, virtue signalling your opinion to the world?

engineer_22 · yesterday at 4:26 AM

They want it to sound like they're allies while they slit Anthropic's throat.

teyopi · yesterday at 1:19 AM

Can we stop posting x links?

https://xcancel.com/OpenAI/status/2027846016423321831

IAmGraydon · yesterday at 1:48 PM

Let’s all remember that this is the guy who bought up the world’s RAM supply in wafer form (which OAI can’t use) to remove it from the market and drive up prices for competitors and for you and me. He is the worst of the worst.

ta9000 · yesterday at 1:13 AM

Everyone knows this is just about Trump funneling money to the Ellisons (Oracle) via OpenAI. It really is that simple. This is all just pretext.

csto12 · last Saturday at 11:48 PM

Wow, so brave after accepting the contract. This is more insulting than OpenAI saying they are a supply chain risk.

emsign · yesterday at 10:16 AM

Bye bye OpenAI

mihaaly · yesterday at 12:56 PM

Nice try.

rdiddly · yesterday at 12:37 AM

Us bribing them: fine

Us taking the contract, working for them and enabling them: fine

It being renamed the Dept. of War in the first place: totally fine, we loudly and bootlickingly repeat it

Anthropic being blacklisted: whoa there, we have ethics!

Footnote: any time the winning team tries to speak well of or defend the losing team I always think of this standup routine: https://m.youtube.com/watch?v=Qg6wBwhuaVo

AmericanOP · last Saturday at 11:37 PM

I do think OpenAI's brand is dumpstered.

angry_octet · yesterday at 7:07 AM

Et tu, Brute?

jchook · yesterday at 1:59 AM

Fool me once...

bmitc · yesterday at 6:21 AM

Quit referring to it as the department of war. It's the Department of Defense.

throwawayaghas1 · yesterday at 8:05 AM

I don't believe this one bit. Altman and Trump have been in bed together since the inauguration.

throwaway314155 · yesterday at 3:14 AM

Can someone please explain plainly what this means and what happened, and why it is the source of so much controversy?

I'm not being insincere - I am genuinely confused and would benefit greatly from a (hopefully unbiased) account of what this is all about.

hmokiguess · yesterday at 1:48 AM

Now that’s something. Another advertising campaign. Wow.

resters · yesterday at 1:00 AM

In my opinion any AI company working with the Trump administration is profoundly compromised and is ultimately untrustworthy with respect to concerns about ethics, civil rights, human rights, mass-surveillance, data privacy, etc.

The administration has created an anonymous, masked secret police force that has been terrorizing cities around the US and has created prisons in which many abductees are still unaccounted for and no information has been provided to families months later.

This is not politics as usual or hyperbole. If anything it is understating the abuses that have already occurred.

It's entertaining that OpenAI prevents me from generating an image of Trump wearing a diaper but happily sells weapons-grade AI to the architects of ICE abuses, among many other blatant violations of civil and human rights.

Even Grok, owned by Trump toady Elon Musk, allows caricatures of political figures!

Imagine a multi-billion-dollar vector db for thoughtcrime prevention connected to models with context windows 100x larger than any consumer-grade product, fed with all banking transactions, metadata from dozens of systems/services (everything Snowden told us about).

Even in the hands of ethical stewards such a system would inevitably be used illegally to quash dissent - Snowden showed us that illegal wiretapping is intentionally not subject to audits and what audits have been done show significant misconduct by agents. In the hands of the current administration this is a superweapon unrivaled in human history, now trained on the entire world.

This is not hyperbole, the US already collects this data, now they have the ability to efficiently use it against whoever they choose. We used to joke "this call is probably being recorded", but now every call, every email is there to be reasoned about and hallucinated about, used for parallel construction, entrapment, blackmail, etc.

Overnight we see that OpenAI became a trojan horse "department of war" contractor by selling itself to the administration that brought us national guard and ICE deployed to terrorize US cities.

Writing code and systems at 100x productivity has been great but I did not expect the dystopia to arrive so quickly. I'd wondered "why so much emphasis on Sora and unimpressive video AI tech?" but now it's clear why it made sense to deploy the capital in that seemingly foolish way - video gen is the most efficient way to train the AI panopticon.

imiric · yesterday at 12:59 PM

The layers of stupidity on this shit cake are staggering. I don't even know where to start...

Let it be known that this rotten industry brought us here, and that all people working for these companies are complicit with what is happening, and with what is yet to come. This is just the beginning.

abhitriloki · yesterday at 7:16 AM

[flagged]

dev1ycan · yesterday at 1:57 AM

Pathetic attempt at damage control, lol.

chmorgan_ · yesterday at 4:09 PM

[dead]

Helloyello · yesterday at 6:06 PM

[dead]

xorgun · yesterday at 2:34 PM

[dead]

jwpapi · yesterday at 1:47 AM

No wonder they think they’re close to AGI when they think we are that stupid.

> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.

This whole sentence accomplishes absolutely nothing: it still lets them do whatever the law allows. It's a fully deceptive sentence.

lenny321 · yesterday at 5:02 AM

[dead]

builderhq_io · yesterday at 3:00 PM

[dead]

catchcatchcatch · yesterday at 1:25 PM

[dead]

proshno · yesterday at 9:41 AM

[dead]

Helloyello · yesterday at 1:36 AM

[dead]

bishop_cobb · yesterday at 2:12 AM

[dead]

roughly · yesterday at 12:25 AM

It feels like Sam's playing chess against an opponent who's playing dodgeball. He's leveraged this situation to get OpenAI in with the DoD in a way that's going to be extremely lucrative for the company and hurt his biggest rival in the process, but I think he's still seeing the DoD as Just Another Customer, albeit a big government one. This administration just held a gun to the head of Anthropic and (if the "supply chain risk" designation holds and does as much damage as they're hoping) pulled the trigger, because Anthropic had the gall to tell them no. One thing this administration's shown is that you cannot hold lines when you're working with them - at some point the DoD's going to cross his "red lines" and he's going to have to choose: risk his entire consumer business and accede to being a private wing of the government like Palantir, or build a genuine tech giant. There's no third choice here.

o175 · yesterday at 1:05 PM

Everyone's applauding Anthropic for having principles. Let's look at what those principles actually do.

Anthropic refused the Pentagon contract. Within hours, OpenAI signed it. The capability didn't pause. It just changed vendors. Anthropic's "red line" is a speed bump on a highway with no exit ramp.

But it does accomplish one thing: it gives their engineers a story they can tell themselves. We're the good ones. We said no. That moral comfort is what lets extremely talented people keep building the exact technology that makes all of this possible.

Worse, the "safety-focused" brand doesn't just pacify the people already there. It recruits researchers who'd otherwise never touch frontier AI, funneling them into building the most powerful models on earth because they've been told this is where the responsible work happens. The red lines don't slow capability development. They accelerate it by capturing talent that would have stayed on the sidelines.

And in this whole drama, who actually represents the public? Trump performs strongman nationalism. The Pentagon performs operational necessity. Anthropic performs moral courage. Everyone has a role. Nobody's role is the people whose data gets collected, whose lives get restructured by these systems. The only party with real skin in the game is the only one without a seat.
