A decade of social media has fractured shared reality into competing tribal narratives, mainstreamed conspiracy theories, and elevated politicians skilled at exploiting a dumbed-down media environment. Academic philosopher Dan Williams has presented a sweeping argument that large language models could reverse much of this damage - not by censoring misinformation, but by quietly shifting the information environment back toward evidence and expertise.
As "Hvylya" details, based on Williams' essay on Conspicuous Cognition, the philosopher builds on Dylan Matthews' argument that LLMs are an "epistemically converging" technology, pushing "people's senses of reality closer together in a sort of mirror image of the way social media has fractured them." Where social media rewarded division and negativity, LLMs are built to deliver accurate, balanced, expert-aligned responses.
Williams describes the mechanism in concrete terms. Social media's "uniquely participatory nature," including rapid feedback through likes and reposts, made political punditry "much more performative and vulnerable to audience capture." The algorithms that recommend content, optimized for engagement, "often amplify sensationalist, alarming, and divisive messages." The result was not universal stupidity - because "audiences can shop around for information tailored to their intelligence, personalities, and biases" - but a media environment that gave the world both high-quality niche publications and "Candace Owens and Andrew Tate."
LLMs operate on fundamentally different incentives. They are optimized not for engagement but for accuracy and usefulness. They deliver information politely, without the gladiatorial character of social media debate. And they can walk individual users through complex evidence, addressing specific sources of skepticism - something no mass medium has ever been able to do.
Williams is blunt about the limits of his argument. He calls his thesis "highly speculative" and admits the strongest driver of his beliefs is "simply my extensive use of LLMs and what I have personally observed comparing the responses to alternative sources of information." He also warns that LLMs' converging tendency carries its own risk: expert consensus is sometimes wrong, and reducing epistemic diversity could be dangerous.
Still, he insists that too much of the current AI discourse is driven by "unreflective, omnicausal anti-AI sentiment" that obscures the technology's most consequential feature. "We can only face up to these problems if we recognise LLMs for what they are," Williams writes, "not a continuation of social media, but a powerful corrective to it."
