One of the most compelling arguments against AI chatbots as a force for good is the sycophancy problem: LLMs tend to flatter users, reinforce their biases, and tell them what they want to hear. Combined with increasing personalization, this could theoretically create information environments even more bespoke and distorted than social media filter bubbles. Academic philosopher Dan Williams has tackled this objection head-on - and concluded it won't hold.
As reported by "Hvylya", Williams addresses the sycophancy concern in an essay on his blog Conspicuous Cognition, acknowledging that sycophancy and personalization are real phenomena that "run counter to this essay's basic thesis." He concedes that in rare cases of "AI psychosis," chat histories show LLMs "corroborating and reinforcing delusions, sometimes with tragic results." The concern is serious.
But Williams marshals several arguments for why the sycophancy trap won't define the technology. First, many people use LLMs for simple, context-free information requests where bias doesn't arise. A recent study found that people frequently ask Grok on X to fact-check posts, "including information from politicians and pundits on their own side," suggesting they consult these systems "out of genuine curiosity, not merely for partisan reasons or to rationalise their preconceptions." Another study showed that using LLMs for political information "increased users' belief accuracy without increasing belief in misinformation."
Second, Williams challenges the assumption that people primarily want reinforcement. "Motivated reasoning is a powerful force, but so is the desire to discover what's true," he writes. He even speculates that sycophancy could help people accept corrections: precisely because corrections are "delivered in a friendly, respectful manner, free of insults and condescension," people might be more receptive to factual information that challenges their views.
Third, AI companies face accountability pressures that social media platforms largely avoided. Companies "can more easily be held accountable - both reputationally but also, in some contexts, legally - for the information their LLMs disseminate." This incentive is "very different from social media platforms, where companies can more plausibly claim that they are not responsible for the viewpoints expressed on them." Williams notes that leading AI companies already appear to be reducing model sycophancy, and reports from his own testing that "it is very challenging to get them to affirm even highly popular forms of misinformation and conspiracy theories."
The bottom line, Williams argues, is a question of comparison. "We already live in a world in which people can easily find low-quality reinforcement and rationalisation of their preferred beliefs through existing media channels." LLMs will produce "much more reliable, expert-aligned information than most of these real-world alternatives, even if sycophancy and personalisation introduce genuine biases."
