WaPo is reporting that OpenAI and xAI already agreed to the Pentagon's "any lawful use" clause, aka, mass surveillance and fully autonomous killbots. From the WaPo article https://archive.is/yz6JA#selection-435.42-435.355
> Officials say other leading AI firms have gone along with the demand. OpenAI, the maker of ChatGPT, Google and Elon Musk’s xAI have agreed to allow the Pentagon to use their systems for “all lawful purposes” on unclassified networks, a Defense official said, and are working on agreements for classified networks.
The only difference is that Anthropic is already approved for use on classified networks, whereas Grok and OpenAI are not yet (though both are being fast-tracked for approval, especially Grok). Edit: someone below pointed out that OpenAI may already be approved at the Secret level, so it's odd that the Washington Post reports they are still working on it.
> fully autonomous killbots
I keep hearing this, but it should be plainly obvious to everyone (at least here) that an LLM is not the right kind of AI for this use case. That's like trying to use ChatGPT as an airplane autopilot; it doesn't make sense. Other ML models might be suited to it, but not an LLM. So why does the "autonomous killbot" thing keep getting brought up when discussing Anthropic and other LLM providers?
For reference, "autonomous killbots" are in use right now in the Ukraine/Russia war, and they run on FPV drones, not acres of GPUs. It should also be obvious that there's a >90% chance every Predator/Reaper drone has had an autonomous kill mode for a decade now. Maybe it's never been used in combat, as far as we know, but to think it doesn't already exist is bonkers.
OpenAI is usable through Azure for Government up to IL-6.
https://devblogs.microsoft.com/azuregov/azure-openai-authori...
Either Anthropic is seen as the clear leader (it certainly is for coding agents), or this is a political stunt to stamp out any opposition to the administration. Or both.