Hacker News

tedsanders · yesterday at 6:27 AM · 71 replies

I'm an OpenAI employee and I'll go out on a limb with a public comment. I agree AI shouldn't be used for mass surveillance or autonomous weapons. I also think Anthropic has been treated terribly and has acted admirably. My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons, and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples). Given this understanding, I don't see why I should quit. If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit, but so far I haven't seen any evidence that's the case.


Replies

baconner · yesterday at 6:43 AM

Respectfully, it's very hard to see how anyone could look at what just happened and conclude that one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that. Either the terms are looser, they're not going to be enforced, or there's another reason for the loud attempt to blacklist Anthropic. It's very difficult to take this at face value in any case. If it is loose terms or a wink agreement not to check on enforcement, you're never going to be told that. We can imagine other scenarios where the stated terms were not the real reason for the blacklisting, but it's a real struggle (at least for me) to find an explanation for this deal that doesn't paint OpenAI in a very ethically questionable light.

tfehring · yesterday at 7:40 AM

(Disclosure, I'm a former OpenAI employee and current shareholder.)

I have two qualms with this deal.

First, Sam's tweet [0] reads as if this deal does not disallow autonomous weapons, but rather requires "human responsibility" for them. I don't think this is much of an assurance at all - obviously at some level a human must be responsible, but this is vague enough that I worry the responsible human could be very far out of the loop.

Second, Jeremy Lewin's tweet [1] indicates that the definitions of these guardrails are now maintained by DoW, not OpenAI. I'm currently unclear on those definitions and the process for changing them. But I worry that e.g. "mass surveillance" may be defined too narrowly for that limitation to be compatible with democratic values, or that DoW could unilaterally make it that narrow in the future. Evidently Anthropic insisted on defining these limits itself, and that was a sticking point.

Of course, it's possible that OpenAI leadership thoughtfully considered both of these points and that there are reasonable explanations for each of them. That's not clear from anything I've seen so far, but things are moving quickly so that may change in the coming days.

[0] https://x.com/sama/status/2027578652477821175

[1] https://x.com/UnderSecretaryF/status/2027594072811098230

ChadNauseam · yesterday at 6:47 AM

Did Sam Altman say that he wouldn't allow ChatGPT to be used for fully autonomous weapons? (Not quite the same as "human responsibility for use of force".)

I don't want to overanalyze things but I also noticed his statement didn't say "our agreement specifically says chatgpt will never be used for fully autonomous weapons or domestic mass surveillance." It said something that kind of gestured towards that, but it didn't quite come out and say it. It says "The DoW agrees with these principles, and we put them in our agreement." Could the principles have been outlined in a nonbinding preamble, or been a statement of the DoW's current intentions rather than binding their future behavior? You should be very suspicious when a corporate person says something vague that somewhat implies what you want to hear - if they could have told you explicitly what you wanted to hear, they would have.

But anyway, it doesn't matter. You said you don't think it should be used for autonomous weapons. I'd be willing to bet you 10:1 that you'll never find altman saying anything like "our agreement specifically says chatgpt will never be used for fully autonomous weapons", now or any point in the future.

retsibsi · yesterday at 1:35 PM

> My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons,

In that case, what on earth just happened?

The government was so intent on amending the Anthropic deal to allow 'all lawful use', at the government's sole discretion, that it is now pretty much trying to destroy Anthropic in retaliation for refusing this. Now, almost immediately, the government has entered into a deal with OpenAI that apparently disallows the two use cases that were the main sticking points for Anthropic.

Do you not see something very, very wrong with this picture?

At the very least, OpenAI is clearly signaling to the government that it can steamroll OpenAI on these issues whenever it wants to. Or do you believe OpenAI will stand firm, even having seen what happened to Anthropic (and immediately moved in to profit from it)?

> and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples)

If OpenAI leadership sincerely wanted this, they just squandered the best chance they could ever have had to make it happen! Actual solidarity with Anthropic could have had a huge impact.

throwawaywd89e · yesterday at 7:01 AM

"AI shouldn't be used for mass surveillance or autonomous weapons". The statement from OpenAI virtually guarantees that the intention is to use it for mass surveillance and autonomous weapons. If this wasn't the intention, then the qualifier "domestic" wouldn't be used, and they would be talking about "human in the loop" control of autonomous weapons, not "human responsibility", which just means there's someone willing to stand up and say, "yep, I take responsibility for the autonomous weapon system's actions", which, let's be honest, is the thinnest of thin safety guarantees.

_heimdall · yesterday at 11:53 AM

My understanding is that OpenAI's deal, and the deal others are signing, only implicitly prevents the use of LLMs for mass domestic surveillance and fully autonomous weapons: today one can argue those aren't legal, and the deal is a blanket allowing all lawful use.

Today it can't be used for mass surveillance, but the executive branch has all the authority it needs to later deem that lawful if it wishes to, the Patriot Act and others see to that.

Anthropic was making the limits contractually explicit, meaning that even if the executive branch moved the line of lawfulness, it still couldn't use Anthropic models for mass surveillance. That is where they got into a fight, and that is why OpenAI and others can claim today that they got the same agreement Anthropic wanted.

mattalex · yesterday at 8:02 AM

Assuming this is real: Why do you think anthropic was put on what is essentially an "enemy of the state" list and openai didn't?

The two things Anthropic refused to do are mass surveillance and autonomous weapons, so why do _you_ think OpenAI, having supposedly refused the same things, still did not get placed on the exact same list?

It's fine to say "I'm not going to resign. I didn't even sign that letter", but thinking that openai can get away with not developing autonomous weapons or mass surveillance is naive at the very best.

pear01 · yesterday at 6:34 AM

Why would you believe that? If that were the case what was the issue with Anthropic even about?

You, and your colleagues, should resign.

jacquesm · yesterday at 9:47 PM

> I don't see why I should quit.

So, can you please draw the line when you will quit?

- If the OpenAI deal allows domestic mass surveillance

- If OpenAI allows the development of autonomous weapons

- If OpenAI no longer asks for the same terms for other AI companies

Correct?

If so, then if I take your words at face value:

- By your reading non-domestic mass surveillance is fine

- The development of AI based weapons is fine as long as there is one human element in there, even if it could be disabled and then the weapon would work without humans involved

- The day that OpenAI asks for the same terms for other AI companies and if those terms are not granted then that's also fine, because after all, they did ask.

I have become extremely skeptical when I see people whose livelihood depends on a particular legal entity come out with precise wording around what does and does not constitute their red line, but I find it fascinating nonetheless, so if you could humor me and clarify, I'd be most obliged.

scarmig · yesterday at 7:27 AM

Why do you suppose OpenAI's deal led to a contract, while Anthropic's deal (ostensibly containing identical terms) gets it not only booted but declared a supply chain risk?

rancar2 · yesterday at 1:49 PM

The founders are all on a first-name basis. I'm surprised no one has noted that Anthropic and OpenAI are winning together by giving the world two different choices, just like the US does in its political landscape. In this arrangement, OpenAI wins the local market of its government and aligned entities (while keeping the free consumer through cost dynamics, an ideal customer profile that is very broad and similar to Google's search audience, on which most of Google's revenue still depends), while Anthropic gets the global market and the prosumer market, where people who can afford choice pay for it.

chasd00 · yesterday at 2:54 PM

#1 weekend HN is not a sane place. #2 emotions are high. #3 for what it’s worth @tedsanders I understand where you’re coming from and I believe you’re making the right choice by staying or at least waiting to make a decision. Don’t let #1 and #2 hurt you emotionally or force you to make a rash decision you later regret.

Edit: I don’t work at OpenAI or in any AI business and my neck is on the chopping block if AI succeeds… like a lot of us. Don’t vilify this guy trying to do what’s right for him given the information he has.

phs318u · yesterday at 6:42 AM

Thank you for responding. Everyone wants to think they will "do the right thing" when they face their own personal Rubicon. In practice, so many factors are at play, not least of which are the other people you may be responsible for. The calculus of balancing those differing imperatives is only straightforward for those who have never faced it squarely. I've been marched out of jobs twice for standing up for what I believed to be right at the time. I am still literally blacklisted (much to the surprise of various recruiters) at a major bank here, 8 years after the fact. I can't imagine that the threat of being blacklisted from a whole raft of companies contracting with a known vindictive regime would make the decision easier.

andsoitis · yesterday at 4:16 PM

Ted, what do you think of your CEO’s statement: “the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.”

The evidence seems to overwhelmingly point in the opposite direction.

syllogism · yesterday at 10:17 AM

You should quit because the only reasonable thing for your leadership to have done is to refuse to sign any agreement with DoW whatsoever while it's attempting to strongarm Anthropic in this fashion.

It doesn't even matter if OpenAI is offered the same terms that Anthropic refused. It's absurd to accept them and do business with the Pentagon in that situation.

If you take the government at its word, it's killing Anthropic because Anthropic wanted to assert the ability to draw _some_ sort of redline. If OpenAI's position is "well sucks to be them", there's nothing stopping Hegseth from doing the same to OpenAI.

It doesn't matter at all if OpenAI gets the deal at the same redline Anthropic was trying to assert. If at the end of this the government has succeeded in cutting Anthropic off from the economy, what's next for OpenAI? What happens next time when OpenAI tries to assert some sort of redline?

What's the point of any talk of "AI Safety" if you sign on to a regime where Hegseth (of all people) can just demand the keys and you hand them right over?

latexr · yesterday at 8:31 AM

> My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons

And you believe the US government, let alone the current one will respect that? Why? Is it naïveté or do you support the current regime?

> If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit.

So your logic is that your company is selling harmful technology to a bunch of known liars who are threatening to invade democratic countries, but because they haven't lied yet in this case (for lack of opportunity), you'll wait until the harm is done and then maybe quit?

I’ll go out on a limb and say you won’t. You seem to be trying really hard to justify to yourself what’s happening so you can sleep at night.

Know that when things go wrong (not if, when), the blood will be on your hands too.

segmondyyesterday at 7:40 AM

You can't be this naive?

roflburger · yesterday at 1:12 PM

Don't piss on my leg and tell me it's raining.

mda · yesterday at 10:10 AM

I can totally see why you should quit, but we see different things apparently.

fluidcruft · yesterday at 2:50 PM

What people don't understand is that domestic surveillance by the government doesn't happen and isn't needed. They know it's illegal and unpopular, and for over two decades they have had a loophole. Since the Bush administration, it has been arranged for private contractors to do the domestic surveillance on the government's behalf. Entire industries have been built around creating "business records" for no other purpose than to sell them to the government to support domestic surveillance. This is entirely legal, and it's why the DoW has been able to get away with saying things like "domestic surveillance is illegal, we don't do that" for over two decades while simultaneously throwing a fit about needing "all legal uses" whenever its access to domestic surveillance is threatened.

There's a big difference between "the government won't use our tools for domestic surveillance" (DoW/DoD/OpenAI/etc) and "we won't allow anyone to use our tools to support domestic surveillance by the government" (Anthropic)

Hegseth and the current Trump admin are completely incompetent in execution of just about everything but competent administrations (of both parties) have been playing this game for a long time and it's already a lost cause.

Qiu_Zhanxuan · yesterday at 5:31 PM

You're paid to look the other way. At least, own it.

germandiago · yesterday at 4:56 PM

To me it seems odd to believe that a replacement wouldn't have to accept the Dept of War's terms. That was the source of the dispute, so...

I do not know, but I would not be very optimistic about those new terms.

virtualritz · yesterday at 9:48 AM

Giving you the benefit of the doubt and assuming [1] does not play a role in your thinking:

I don't mean this to be in any way rude, and I apologize if it comes across as such, but believing it won't be used in exactly this way is just naive. History has taught us this lesson again and again and again.

[1] https://news.ycombinator.com/item?id=47189650#47189970

Griffinsauce · yesterday at 7:53 AM

Aside from that unlikely read, this deal was still used as a pressure point on Anthropic, there's absolutely no way OpenAI was not used as a stick to hit with during negotiations.

What is your red line?

assimpleaspossi · yesterday at 12:19 PM

How would OpenAI respond to China or Russia using OpenAI--or any AI--for mass surveillance or autonomous weapons?

motbus3 · yesterday at 4:37 PM

These sorts of agreements are easily bypassed, especially with tools like these.

Someone might just create a spawn of OpenAI under a different tag and do all the stuff there...

There isn't much of a guarantee, I think.

kaashif · yesterday at 7:07 AM

Anthropic is deemed a betrayer and a supply chain risk for actually enforcing their principles.

OpenAI agrees to be put in the same position as Anthropic.

It seems like you must actually somehow believe that history will repeat itself, Hegseth will deem OpenAI a supply chain risk too, then move to Grok or something?

There's surely no way that's actually what you believe...

datsci_est_2015 · yesterday at 2:20 PM

Please read about the imperial boomerang https://en.wikipedia.org/wiki/Imperial_boomerang

trvz · yesterday at 7:27 AM

You may have missed that no single word said or written by any of the current US government’s members can be believed.

curiousgal · yesterday at 8:52 AM

This is not meant as a personal attack but this has got to be the most naive thing I've read.

ryan_n · yesterday at 12:19 PM

For the record I don’t care if you quit or not. Cash rules after all… However, you are incredibly naive if you think the current admin will follow through on those terms.

nullocator · yesterday at 7:42 AM

I don't know you, so maybe you're actually for real and speaking in good faith here, but honestly this and your other responses in this thread read exactly like "...salary depends on not understanding".

sensanaty · yesterday at 11:11 AM

Assuming this isn't a troll and you really think this, you should at least have the cojones to admit you're taking the blood money instead of trying to pretzel the truth so hard that you just look like a moron instead.

mpalmer · yesterday at 2:39 PM

Looks to me like you have decided that you are being paid to shut up and take the word of the most thoroughly dishonest and corrupt US government we've yet seen. Why on God's slowly-browning green earth do you trust that Altman got the deal Anthropic was trying for?

q3k · yesterday at 8:41 AM

Coward.

dannyfreeman · yesterday at 1:32 PM

Your work will be used to power an auto aim kill bot. I personally couldn't live with that.

4b11b4 · yesterday at 12:51 PM

lol, naive as hell. Why would your company's agreement be the same as the one another company just refused, over those _same_ terms? My question doesn't even make sense; this is a contradiction, therefore your statement must be false. There, it's proven.

vimda · yesterday at 9:13 AM

"domestic" "mass" surveillance, two words that can be stretched so thin they basically invalidate the whole term. Mass surveillance on other countries? Guess that's fine. Surveillance on just a couple of cities that happen to be resisting the regime? Well, it's not _mass_ surveillance, just a couple of cities!

Nekorosu · yesterday at 9:42 AM

I won't trust a word coming from Sam Altman's mouth until I see official signed documents (which I won't).

bambax · yesterday at 9:34 AM

"It is difficult to get a man to understand something, when his salary depends on his not understanding it."

retornam · yesterday at 6:58 AM

I have a bridge to Brooklyn to sell you if you believe this.

Standing up for what's right is often not easy and involves hard choices and consequences. Your leader has shown you and the world that he is not to be trusted.

I can't tell you what to do but I hope you make the right decision.

leptons · yesterday at 9:44 AM

>OpenAI deal disallows domestic mass surveillance

And the US Military is forbidden from operating on US soil, but that didn't stop this administration from deploying US Marines to California recently.

You're fooling yourself if you think this administration is following any kind of rule.

mmanfrin · yesterday at 7:46 AM

You can make blood money, but you have to be aware it's blood money. Don't delude yourself into thinking you work for an ethical or moral company.

mathisfun123 · yesterday at 6:32 AM

> Given this understanding, I don't see why I should quit.

https://en.wikipedia.org/wiki/Motivated_reasoning
