Belief in conspiracy theories - that elections are rigged, or that vaccines cause autism - is over-represented among people who post on social media, but AI chatbots almost never express agreement with these claims, according to a new data analysis.
FT data journalist John Burn-Murdoch ran thousands of simulated conversations across the most widely used AI platforms and compared the results to real-world data on social media posting behavior, "Hvylya" reports, citing the Financial Times.
The contrast was stark. Last year, Burn-Murdoch used detailed data on the ideological positions of social media users to show that the radical right and left are over-represented among them. The new analysis applied the same dataset - tens of thousands of responses to questions on policy preferences and sociopolitical beliefs - to AI chatbots. Where social media amplifies fringe positions, chatbots suppress them.
The finding fits a broader theoretical framework. British philosopher Dan Williams has argued that AI companies are "fundamentally technocratising," exerting the opposite force to social media's radically democratizing influence. When large language models surface harmful content, "they are on the hook," he wrote - a sharp contrast to social media firms, which have avoided liability by claiming to be neutral platforms.
American writer Dylan Matthews made a similar case. Where social media's mechanisms push toward personalization and fragmentation, large language models are innately "converging" - their underlying dynamics push toward objective reality, Matthews argued. As a vivid example, he cited the repeated occasions on which Elon Musk has been fact-checked by his own AI chatbot, Grok.
Burn-Murdoch cautioned that these are early findings and usage patterns may evolve. But the data, he wrote, offers "cause for optimism that the next information revolution may take us in a less corrosive direction than the last."
Earlier, "Hvylya" covered how the Pentagon's AI targeting program lacks training, doctrine, and rules.
