During negotiations with the Pentagon, a proposal emerged that could have bridged the impasse over autonomous weapons: keep Anthropic's AI in the cloud and out of the weapons themselves. The models would synthesize intelligence before operations but would not make kill decisions. Anthropic quickly dismissed the idea, The Atlantic has reported, and hours later OpenAI embraced a strikingly similar arrangement.

As reported by "Hvylya", citing a The Atlantic source familiar with the negotiations, Anthropic reasoned that in modern military AI architectures, "the distinction between the cloud and the edge is no longer all that defined." The company concluded the boundary was "less a wall and more of a gradient."

The logic was straightforward. Drones on the battlefield can now be orchestrated through mesh networks that include cloud data centers. And although the drones are designed to operate autonomously if connectivity is lost, the military's impulse will always be to maintain maximum connectivity between them and the most powerful models in the cloud. The better the connection, the more intelligent the machine, and the harder it becomes to claim the AI is not involved in lethal decisions.

The Pentagon itself has been working to blur this boundary. Part of the goal of its Joint Warfighting Cloud Capability is to push computing resources closer to the fight. As The Atlantic's source put it, the AI may sit in an Amazon Web Services server in Virginia rather than a war zone overseas, but "if it's making battlefield decisions, from an ethical standpoint, that's a distinction without much difference." According to the source, "it didn't take much analysis" for Anthropic to discard the cloud provision.

Yet this was precisely the arrangement that OpenAI found compelling. Just hours after Anthropic's deal collapsed, OpenAI announced its own agreement with the Pentagon, releasing a statement that touted the fact that its AI would be deployed "only in the cloud." CEO Sam Altman, who earlier in the week had expressed solidarity with Anthropic's position on autonomous weapons, did not respond to The Atlantic's request for comment.

As of Saturday afternoon, nearly 100 OpenAI employees had signed an open letter supporting the same red lines as Anthropic on mass domestic surveillance and autonomous weapons. If Altman finds himself face-to-face with them in the office on Monday, The Atlantic notes, "he may have to explain why this idea that Anthropic quickly dismissed out of hand proved so compelling to him."
