Large language models like ChatGPT, Claude, and Gemini are quietly undoing social media's most consequential legacy - the shift of influence away from experts and toward popular biases, conspiracy theories, and sensationalism. Academic philosopher Dan Williams has laid out a detailed case that LLMs represent a fundamentally new kind of communication technology, one that pushes public opinion back toward evidence-based, expert-aligned information.

As "Hvylya" reports, citing Williams' essay on Conspicuous Cognition, the philosopher describes LLMs as "a kind of anti-social media." Where social media has been "democratising, epistemically diverging, engagement-optimised, and performative," LLMs are "technocratising, epistemically converging, accuracy-optimised, and polite."

Williams traces the arc of communication technologies from the printing press through radio, television, and social media, showing how each shifted the balance between elite gatekeepers and ordinary people. Social media, he argues, was a "radically democratising technology" that allowed anyone to bypass traditional gatekeepers, but it also mainstreamed conspiracy theories, bigotry, and sensationalism by filtering public debate through engagement-maximizing algorithms.

LLMs push in the opposite direction. The major AI companies are competing to build the most intelligent systems possible for vast, diverse user bases, including businesses that depend on reliable information. This goal - "reaping huge profits by putting expert-level intelligence in everyone's hands" - inherently cuts against producing partisan content or misinformation. Fire up any leading LLM and ask about a politically charged topic, Williams suggests, then compare the response with what you find scrolling social media. The difference in accuracy and nuance will be immediately obvious.

The philosopher acknowledges this thesis sounds counterintuitive in a discourse dominated by warnings about hallucinations, bias, and AI disinformation. But he insists the critics are "missing the forest for the trees." The most consequential feature of LLMs for public opinion is simple: "it greatly improves people's access to accurate, evidence-based information." That feature gets little coverage precisely because it doesn't connect to attention-grabbing threats or help anyone demonize Big Tech.

Williams frames the development as a partial return to what journalist Walter Lippmann envisioned over a century ago - "intelligence bureaus" that deploy scientific methods to assemble facts for citizens and policymakers. LLMs are fulfilling that vision "in a form he could never have imagined," though the classic problems with technocracy - expert bias and the benefits of democratic debate - remain very much alive.
