Hacker News

qaid · yesterday at 11:20 PM

I was reading halfway through when one line struck a nerve with me:

> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

So not today, but the door is open for this after AI systems have gathered enough "training data"?

Then I re-read the previous paragraph and realized it's specifically only criticizing

> AI-driven domestic mass surveillance

And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance

A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War


Replies

xeonmc · yesterday at 11:56 PM

> I thought "Anthropic" was about being concerned about humans

See also: OpenAI being open, and the Democratic People's Republic of Korea being democratic and people-first[0].

[0] https://tvtropes.org/pmwiki/pmwiki.php/Main/PeoplesRepublicO...

nubg · yesterday at 11:21 PM

I think it's phrased just fine. It's not up to Dario to try to make absolute statements about the future.

ghshephard · yesterday at 11:43 PM

I think it goes without saying that once the systems are reliable, fully autonomous weapons will be unleashed on the battlefield. But they have to have safeguards to ensure that they don't turn on friendly forces and only kill the enemy. What Anthropic is saying is that right now they can't provide those assurances. When they can, I suspect those restrictions will be relaxed.

TaupeRanger · yesterday at 11:48 PM

What else would you expect? The military is obviously going to develop the most powerful systems they can. Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”? What if Anthropic ends up developing the safest, most cost effective systems for that purpose?

skeledrew · today at 12:35 AM

Well, if they hadn't stated that they were that far in line with the administration's ideals, they would likely already be fully blacklisted as enemies of the state. Whether they agree with what they're saying or not, they're walking on eggshells.

altpaddle · yesterday at 11:28 PM

Unfortunately, I think the writing is clearly on the wall: fully autonomous weapons are coming soon.

yujzgzctoday at 12:23 AM

> the door is open for this after AI systems have gathered enough "training data"?

Sounds more like the door is open for this once reliability targets are met.

I don't think that's unreasonable. Hardware and regular software also have their own reliability limitations, not to mention the meatsacks behind the joystick.

orochimaaru · yesterday at 11:35 PM

Their models are being used by the military today, so they are never going to be against mass surveillance outright. They can, however, scope their objection to domestic mass surveillance.

rafark · today at 1:34 AM

I said exactly this a few days ago elsewhere. It’s disappointing that they (and often other American companies) seem to restrict their “respect” and morals to Americans only. Or maybe it’s just semantics or context, because the topic at hand is about Americans? I don’t know, but it gives “my people are more important than your people”, exactly as you said in your last paragraph.

urikaduri · today at 12:24 AM

The Gandhi of the corporate world is yet to be found.

nhinck2 · today at 1:27 AM

> And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance

You have to be deliberately naive, in a world where Five Eyes exists, to somehow believe that "foreign" mass surveillance won't be used domestically.

mgraczyk · today at 12:40 AM

Anthropic doesn't forbid the DoW from using its models for foreign surveillance. It's not about never harming others; it's about doing what is best for humanity in the long run, all things considered. I personally do not believe that foreign surveillance is automatically harmful, and I'm fine with our military doing it.

jamesmcq · today at 12:20 AM

So AI systems are not reliable enough to power fully autonomous weapons, but they are reliable enough to end all white-collar work in the next 12 months?

Odd.

show 3 replies