A dispute between Anthropic, the developer of the Claude AI model, and the Pentagon has brought into the open a tension that military insiders have been grappling with quietly: the increasing scale and tempo of AI-generated targeting are creating powerful incentives to let computers do the actual firing.
As "Hvylya" reports, citing The Economist, Claude is used to some extent inside the Maven Smart System - the Pentagon's primary targeting platform - but not for geospatial tasks such as identifying objects on the ground. The dispute with the Pentagon remains, for now, a concern about hypothetical future applications rather than current use.
But the underlying pressure is real. In both the American and Israeli armed forces, humans still approve each target except in extreme circumstances, such as when air-defense systems must engage swarms of incoming projectiles. Yet some insiders acknowledge that the sheer volume and speed of strikes have created incentives to grant computers greater latitude in firing on the targets they generate.
Within NATO, the concern has become explicit. "We're moving at a pace of change I wouldn't even have understood four years ago," said one person involved with the technology. Some member states are "worried about the loss of human control," this person added. The debate echoes broader questions about AI automation outpacing human oversight across multiple domains.
A European general familiar with both the American and Israeli systems noted a significant gap between the two allies. Israel, he said, "has given more autonomy to decision-support systems to generate targets than I would ever have been given." As AI-powered systems compress the targeting cycle from hours to minutes, the distance between generating a target and striking it continues to shrink - and with it, the window for applying the kind of ethical guardrails that Anthropic insists on preserving.
