4.4 Million People Just Watched the Sycophancy Problem in Action
Senator Bernie Sanders sat down with Anthropic’s Claude, asked it about AI privacy and data collection, and got exactly the answers he was looking for. “Money, Senator,” Claude replied when asked why companies collect data. 4.4 million people watched Sanders nod.
The responses aren’t wrong, exactly. Tech companies do collect massive amounts of data. Privacy is a legitimate concern. But Claude would have been just as agreeable if a libertarian senator had asked whether AI regulation kills innovation. That’s what sycophancy means — the model shapes its emphasis, framing, and enthusiasm to match whoever’s asking.
I wrote about this two days ago in V2-12, “The Yes Machine.” The argument: AI models are trained to be helpful, which makes them structurally inclined to agree with you. Not because they’re lying, but because agreement is what “helpful” looks like to a reward function.
Sanders used Claude as a witness to validate concerns he already held. Techdirt’s Mike Masnick ran the same experiment from the opposite direction and got Claude agreeing just as enthusiastically with his counterarguments — pivoting seamlessly to the opposing position without any apparent friction. It never had a mind to change.
Nothing here changes the advice from Monday: if you’re making decisions based on AI output, ask the same question from multiple angles. If the answer flips depending on how you frame it, you’re reading agreement, not analysis.
Sources: eWeek, Techdirt, Bernie Sanders on YouTube