The Chinese military is building an arsenal of AI-powered cognitive warfare tools - including deepfake generators, social media manipulation systems, and perception warfare technologies - designed to distort adversaries' understanding of reality during a conflict. Multiple PLA procurement documents explicitly request deepfake capabilities, according to a Georgetown University study.
Researchers Sam Bresnick, Emelia S. Probasco, and Cole McFaul analyzed thousands of PLA procurement requests for Georgetown's Center for Security and Emerging Technology (CSET) and published their findings in Foreign Affairs. The military views AI-generated images, video, and audio as "potent tools for influencing public opinion and manipulating adversaries' perceptions," Hvylya reports.
The PLA's ambitions extend well beyond deepfakes. It is developing AI systems to "identify foreign populations' political views, predict social unrest, and manipulate adversaries' cognition and decision-making." This marks a significant divergence from U.S. military AI programs, which focus primarily on planning, targeting, and battlefield awareness rather than population-level psychological operations.
The information warfare push also has a defensive component. The PLA is building AI tools to automate intrusion detection on its own networks, strengthen the resilience of military communications, and bolster its cyber operations. Officers and soldiers already use AI to simulate virtual battlefields and model competitor behavior during training exercises.
The researchers warn that such capabilities could trigger dangerous feedback loops. Some PLA decision-making systems rely on open-source data, and adversaries may be "incentivized to manipulate the information environment" - flooding social media with false signals or disrupting commercial satellite imagery to degrade one another's AI tools.
