Hacker News

basch • yesterday at 7:13 PM • 3 replies

Without reading every word of every embedded tweet, a part missing from the conversation is HOW they are strongarming.

It isn't happening in private. It's a public threat, made in the court of public opinion, intended to apply societal pressure on the company. They are attempting to reframe Anthropic's decision as a tribal one, and to damage the brand's reputation within that tribe unless it capitulates.


Replies

throw0101a • yesterday at 7:43 PM

> Without reading every word of every embedded tweet, a part missing from the conversation is HOW they are strongarming.

There are two possibilities, per a Lawfare analysis:

> The government would likely argue that dropping the contractual restrictions doesn't change the product. Claude is the same model with the same weights and the same capabilities—the government just wants different contractual terms. […] Anthropic would likely argue the opposite: that its usage restrictions are part of what Claude is as a commercial service, and that Claude-without-guardrails is a product it doesn't offer to anyone. On this view, the government is asking for a new product, and the statute doesn't clearly authorize that.

and

> The more extreme possibility would be the government compelling Anthropic to retrain Claude—to strip the safety guardrails baked into the model's training, not merely modify the access terms. Here the characterization question seems easier: a retrained model looks much more like a new product than dropping contractual restrictions does. Admittedly, the government has a textual argument in its favor: the DPA's definitions of "services" include “development … of a critical technology item,” and the government could frame retraining Claude as exactly that. Whether courts would accept that framing, especially in light of the major questions doctrine, is another matter.

* https://www.lawfaremedia.org/article/what-the-defense-produc...

* https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950

An even more extreme scenario: could the DPA be used to nationalize the model outright, giving the government ownership, which it could then open to more amenable AI players?

foogazi • yesterday at 8:46 PM

> It isn't in private.

We don’t know this

EA-3167 • yesterday at 7:53 PM

The top line of the article gives a big hint: Anthropic signed a contract with the "killing people" part of the government, and now it's putting on a show. No contract, no leverage.

The only threat the Pentagon has is to terminate the contract.
