The predicted wave of AI-powered disinformation - deepfakes destroying trust, propaganda bots manipulating elections, fabricated media indistinguishable from reality - has largely failed to materialize in the way experts warned. Academic philosopher Dan Williams has mounted a systematic case that "almost all of the recent alarmism and catastrophising about deepfakes and AI-based disinformation has largely proven to be unfounded."

"Hvylya" draws attention to Williams' analysis on Conspicuous Cognition, where the philosopher dismantles several assumptions underlying the AI disinformation panic. The first is that disinformation itself is a significant force shaping attitudes. Williams argues it simply is not: "People have sophisticated cognitive defences against manipulation and deception, and the reputational risks of spreading AI-based falsehoods and fabrications are strong enough to discourage most influential figures and media outlets from doing so."

The real-world effects of AI misinformation are often counterintuitive, Williams notes. Many speculate that deepfakes will cause people to lose all trust in recordings. But an equally likely outcome is that people will "restrict their trust to recordings verified by established media outlets and other information sources that have built up a reputation for trustworthiness." There is tentative evidence for this: studies show people place greater value on outlets they deem credible when the existence of AI-generated misinformation is made salient to them.

Williams also points to evidence that AI tools are already being used to fight misinformation rather than spread it. People frequently ask Grok on X to fact-check posts from politicians and pundits, "including information from politicians and pundits on their own side," suggesting genuine curiosity rather than partisan motivation. Research also shows that use of Grok to flag content as false "slightly raises the likelihood that posters will remove the information from the platform."

The disinformation panic, Williams argues, is part of a broader pattern of "unreflective, omnicausal anti-AI sentiment" that throws every possible complaint at the technology - "climate change, water use, hallucinations, bias, misinformation, jobs, existential risk" - with very little concern for accuracy or proportion. This isn't helpful, he insists. When it comes to LLMs' effects on the information environment, the most likely impact is simply that they greatly improve people's access to expert-level information.
