Anthropic did not oppose autonomous weapons in principle during its negotiations with the Pentagon, The Atlantic has revealed. The company offered to work directly with the U.S. military to improve the reliability of killer drones, but drew a firm red line at deploying its current AI models in systems that select and engage targets without human oversight.
As reported by Hvylya, The Atlantic cited a source familiar with the talks who said Anthropic's leaders believe their AI has not yet reached the threshold where it could reliably replace human judgment in lethal decisions. They worry the models "could lead the machines to fire indiscriminately or inaccurately, or otherwise endanger civilians or even American troops themselves."
The company drew a comparison to self-driving technology. Just as autonomous vehicles are now in some cases safer than those driven by humans, killer drones may someday be more accurate than a human operator and less likely to kill bystanders during an attack. But in Anthropic's view, that day has not yet arrived.
The stakes are enormous. The U.S. military has budgeted $13.4 billion for autonomous weapons systems in fiscal year 2026 alone. These systems range from individual drones to entire swarms that can be deployed in the air and at sea. Anthropic's AI model is currently the only one allowed into the federal government's classified systems, making the company a critical player in the Pentagon's technology infrastructure.
The autonomous weapons dispute was ultimately not the final breaking point. That came when Anthropic learned the Pentagon wanted to use its AI to analyze bulk personal data collected from American citizens. But the disagreement over killer drones illustrates a more nuanced position than the one often attributed to the company: Anthropic is not anti-military; it is against premature deployment.
