The populist wave that swept Western democracies over the past decade owed much of its force to social media - a technology that bypassed elite gatekeepers and amplified perspectives the establishment had long suppressed. Now, according to the philosopher Dan Williams, large language models are emerging as a powerful counterforce, one that systematically shifts public opinion away from populist narratives and back toward expert consensus.

As "Hvylya" reports, citing Williams' analysis on Conspicuous Cognition, Williams argues that social media benefited populism "not by brainwashing the masses with viral fake news, but by exposing voters to widespread non-elite perspectives and making it easier to mobilise around them." In Western liberal democracies, those perspectives included "xenophobia, conspiracy theories, and quack science" - views that conflicted with the liberal establishment's technocratic progressivism.

LLMs work differently. Williams describes them as "a kind of anti-social media" - a technology that produces information "heavily skewed towards expert opinion and communication styles." Unlike social media's engagement-maximizing algorithms that reward division and sensationalism, LLMs are built by companies competing to deliver the most intelligent, accurate systems possible. This commercial imperative naturally pushes output toward evidence-based information rather than partisan content.

Crucially, Williams argues that LLMs are more effective at shaping opinion than traditional human experts ever were. Unlike human experts, they can "rapidly deploy encyclopaedic knowledge to answer people's idiosyncratic questions." Their responses can be probed and questioned "without them ever getting tired or frustrated." And they deliver expert opinion "without such status threats" - the condescension and rudeness that often accompany human experts trying to educate the public.

This makes LLMs a potent tool against conspiratorial thinking. Williams points to evidence showing they "can be highly persuasive, even in correcting conspiratorial beliefs that many assumed were beyond the reach of rational persuasion." The polite, patient, encyclopedic character of LLM responses bypasses the psychological defenses that kick in when human experts lecture the public.

Williams is careful to note the limits of this effect. Most people don't pay much attention to politics, and communication technologies have moderate impacts compared to deeper political and economic forces. When it comes to reducing right-wing populism, "bringing immigration policy more in line with voters' preferences would very likely have a much bigger effect than any change to the information environment." His claim is simply that LLMs will have a technocratising effect "at the margin" - but at the margin where communication technologies operate, it could be a big deal.
