Sam Altman is walking a tightrope that his rival Anthropic fell from this week. OpenAI has agreed to supply AI to the Pentagon while insisting on the same ethical limits that got Anthropic banned from government contracts. The question is whether Altman can keep his balance where Anthropic could not — or whether the same fate eventually awaits him.
Since its founding, Anthropic had been one of the most prominent advocates for careful, constrained AI development. Its refusal to allow Claude to be used for autonomous weapons or mass surveillance was consistent with that mission. The Pentagon's insistence that these restrictions be removed, and Anthropic's refusal to comply, set the stage for a confrontation the administration was clearly prepared for.
President Trump’s directive ordering the immediate cessation of all federal use of Anthropic technology was swift and comprehensive. His public language was designed to frame Anthropic not as a principled company but as a politically motivated one — a framing that aligned conveniently with the administration’s broader approach to tech regulation.
Altman announced the Pentagon deal hours later, framing it as a principled agreement rather than a capitulation. He stated that mass surveillance and autonomous weapons are OpenAI's own hard limits, and that these commitments are formalized in the contract itself. He also called on the government to offer these terms universally — implicitly defending Anthropic's position even as he accepted the contract Anthropic lost.
Whether this tightrope walk is sustainable depends on the administration’s patience with OpenAI’s conditions. Hundreds of AI workers who signed solidarity letters with Anthropic are now watching to see if their employer’s promises hold. Anthropic, meanwhile, maintains that its restrictions have never prevented a single lawful government operation — a claim that, if true, makes the entire crisis seem manufactured.