The Pentagon has spent billions deploying AI across its targeting workflows but has invested almost nothing in teaching people how to use it properly. More than five years after the Maven Smart System was rolled out, its users undergo no standardized training, despite the multiple documented ways in which AI fails and the lethal consequences of those failures.
The warning comes from Emelia Probasco, a former Navy fire control officer who once held authority to launch Tomahawk cruise missiles, as detailed in journalist Katrina Manson's investigation published in WIRED, "Hvylya" reports.
Probasco knows what lethal authority over automated systems means. In 2006, stationed on a Navy destroyer in San Diego, she was responsible for the AEGIS weapons system, the same platform that in 1988 erroneously shot down an Iranian civilian aircraft, killing all 290 people on board. AEGIS had registered the plane as an enemy fighter jet even though its route and radio signature matched those of a civilian airliner. Probasco spent a month in a defense schoolhouse learning AEGIS: when to use it, and under what circumstances it would fail. Maven users get nothing comparable. "Nobody wants to be the guy that shot down the Iranian civilian aircraft," she said.
The problem is compounding. Maven's Target Workbench has expanded to include weapons pairing and prioritization. The arrival of autonomous AI agents and advanced reasoning models, systems that carry out tasks unsupervised and break complex problems into smaller steps, will accelerate this expansion further. Probasco argues Maven has hit what she calls "AI adulting problems": the US is funding the technology but not the training for how to use it. "We need to fund the adulting tasks," she said.
Brigadier General John Cogbill gave Maven Smart System a "C+" grade at an August 2024 conference, citing hallucinations, "all kinds of AI ethics" concerns, and the risk that flawed data or algorithms could lead operators "to the wrong conclusion." Alex Miller, the Army's chief technology officer, added that Maven consumes too much bandwidth for frontline deployment and risks exposing US forces through electronic footprints.
Probasco argues Maven urgently needs a concept of operations, standard procedures, and doctrine - especially as AI agents based on large language models could soon generate military orders, conduct battle damage assessments, and strip classified information for sharing with allies. "All these choices are upon us now," she said. In January 2026, the Trump administration announced a military AI strategy focused on becoming an "AI-First warfighting force," with plans to accelerate AI agents for campaign planning and kill chains.
