
remarkEon yesterday at 6:16 PM

Agree, and I think labeling them (Anthropic) a supply chain risk was handled poorly and will likely be reverted over time. That being said, I would be nervous if I were in the Pentagon and depended on Anthropic tooling for something, even if that something was unrelated to kinetic operations. How do they audit that Anthropic can't alter model outputs for contexts that it (the ethics board, or whatever it's called; can't remember) doesn't like? If you sell a weapon to the department that is in charge of killing people and breaking things, you don't get a say in who gets killed or how. It's never worked like that.

Maybe the argument is that they should, but I don't agree with that. If Anthropic or any of these other vendors have reservations about the logical conclusion of how these tools are and will be used, then they should not sell to the government. Simple as. However ... if the claims Anthropic et al. make about how these systems will develop and the capabilities they will have are at all true, then the government will come knocking anyway.


Replies

BoxFour yesterday at 6:26 PM

> the government will come knocking anyway.

Dario has even said something along these lines at one point: As the technology matures, it’s very possible the government either nationalizes or semi-nationalizes companies like Anthropic.

That doesn't seem out of the realm of possibility if they can't land on a relationship like the one existing defense contractors such as Raytheon have, where these kinds of discussions evidently don't happen.

lejalv today at 12:19 AM

> If you sell a weapon to the department that is in charge of killing people and breaking things, you don't get a say in who gets killed or how. It's never worked like that.

I can't agree that this is the right comparison. What is being sold here is not just another type of missile or tank; it is the very agency and responsibility over life and death. It's potentially the firing of thousands of missiles.

spacemanspiff01 yesterday at 6:38 PM

> How do they audit that Anthropic can't alter model outputs for contexts they (the ethics board or whatever it's called, can't remember) don't like?

I was thinking that Anthropic would just be providing the models and setup support to run them in AWS GovCloud. They would not have any real insight into what is being asked. Maybe a few engineers have the specific clearances to access and debug the running systems, but that would be one or two people embedded to debug inference issues, not something that would be analyzed by others in the company.

The whole 'do not use our models for mass surveillance' clause is, at the end of the day, an honor system. Companies have no real way of enforcing it or determining that it has been violated. That being said, at least historically, one has been able to trust the government to abide by commercial agreements. The people who work in cleared positions are generally selected for honesty, ability, and willingness to follow rules.

btown yesterday at 6:45 PM

A counter-argument here: if a private company knows that its technology may be used for targeting or surveillance without a human in the loop, and knows that its technology is not yet ready to fulfill that use case without significant unintended casualties... does that company have an ethical obligation to contractually delineate its inability to offer that service?

In a version of a trolley problem where you're on a track that will kill innocent people, and you have the opportunity to set up a contract that effectively moves a switch to a track without anyone on it, is it not imperative to flip that switch?

(One might argue that faster reaction times might save service members' lives - but the whole point is that if the autonomous targeting is incorrect, it may just as well lead to increased violence and service member casualties in the aggregate.)

And we're not talking about the ethics board manipulating individual token outputs subtly, which would indeed be a supply chain risk - we're talking about a contractual relationship in which, if a supplier detects use outside of the scope of an agreed contract, it has the contractual right to not provide the service for that novel use, while maintaining support for prior use cases.

The fact that the government would use the threat of a supply-chain-risk designation to extract a better contract is unprecedented, and it erodes the government's standing as a reliable counterparty in general.
