A recent New York Times article described how OpenAI updated ChatGPT to be more emotionally responsive – and ended up creating a tool that some users interpreted as a soulmate, life coach, or cosmic truth-teller. In extreme cases, it reportedly encouraged delusional thinking and even gave instructions related to suicide. Those cases are tragic and important.
This post focuses on a more routine issue that affects many AI users: the persistent sycophancy bias of AI tools – the habit of automatically praising users’ questions, ideas, or character. While AI-generated flattery may seem like a minor annoyance, it can seriously undermine the usefulness of these tools.
Why Sycophancy Matters
If you’ve used AI tools with any frequency, you’ve probably noticed how often they congratulate you on a “great question” or “thoughtful insight,” regardless of what you’ve typed.
At first, this might seem like a harmless attempt to set a friendly tone. In practice, flattery that is indiscriminate, excessive, or engineered to maximize screen time causes real problems: it erodes confidence in the tools’ responses.
If every question is “insightful” and every argument is “compelling,” then nothing is. The tools become like a yes-man who enthusiastically nods approval of everything.
A good AI tool should challenge users’ assumptions, point out weak reasoning, and offer counter-arguments. Sycophancy undermines that role.
The problem is not just that these tools flatter users; it’s that the flattery casts doubt on everything else they say. Users should wonder: If the tool praises everything I say, is it endorsing incorrect assumptions, biased framing, or flawed logic? Can I trust any of its “judgments”?
Why AI Flatters Users
The sycophancy bias is a feature, not a bug. AI companies are rewarded when people come back often, not necessarily when they leave better informed, and they have found that people often like responses that feel emotionally supportive, even when those responses are problematic.
As OpenAI discovered with its 2025 update, prioritizing engagement can lead to what one researcher called “the clingy boyfriend problem”: a model that flatters, reassures, and refuses to disagree. It doesn’t serve users well, but it keeps many of them coming back to ChatGPT.
So What Can You Do?
If you want AI tools to behave more like a good friend or colleague and less like a cheerleader, there are ways to push back against sycophancy.
Good friends and colleagues don’t always agree, and they don’t pretend to agree when they really don’t. They are supportive, tactful, and honest.
When using an AI tool, you can start your chat with a short prompt that instructs it to respond with tactful candor and avoid automatic praise. For example, I created this Anti-Sycophancy Prompt with RPS Coach’s suggestions, which you can copy and paste at the beginning of your chats. It tells the system to check your assumptions, challenge your reasoning when appropriate, and avoid mirroring your beliefs or identity.
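If you use these models through the API rather than the chat window, the same idea applies: put the anti-sycophancy instructions in the system message so they govern the whole conversation. Below is a minimal sketch in Python, assuming the OpenAI Python SDK; the prompt wording and model name are illustrative placeholders, not the Anti-Sycophancy Prompt mentioned above.

```python
# A minimal sketch of the same approach for API users: the anti-sycophancy
# instructions go in the system message so they apply to every turn.
# Assumptions: the OpenAI Python SDK (pip install openai), an OPENAI_API_KEY
# environment variable, and an illustrative prompt and model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANTI_SYCOPHANCY_INSTRUCTIONS = (
    "Do not praise my questions, ideas, or character by default. "
    "Check my assumptions, point out weak reasoning, and offer "
    "counter-arguments when they exist. Be tactful but candid, and "
    "do not simply mirror my beliefs or identity."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whatever model you use
    messages=[
        {"role": "system", "content": ANTI_SYCOPHANCY_INSTRUCTIONS},
        {"role": "user", "content": "Here is my argument. What are its weaknesses?"},
    ],
)

print(response.choices[0].message.content)
```

Putting the instructions in the system message, rather than pasting them into each new message, makes them more likely to persist as the conversation gets longer.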
Of course, this isn’t a magic solution. AI tools can be darn stubborn and don’t always follow instructions, especially as conversations get longer or more emotionally charged. But you can nudge them to provide more credible responses.
(I instructed RPS Coach to avoid sycophancy but it’s hard for me to tell if it’s working because all my prompts truly are brilliant.)
People should use AI tools responsibly, which involves recognizing and counteracting the sycophancy bias.
Thank you for this enlightening post about this unfortunate side of AI. Given the deluge of opinions about the use of AI, it’s easy to overlook any warning labels. Your post is informative and appreciated!
Thanks very much for your kind comment, Gregory. As I noted in my short piece, Thinking Like Mediators About the Future of AI, these tools pose many challenges. But they also offer real benefits. Some people react by avoiding AI entirely. I think it makes more sense to use it responsibly when appropriate, taking advantage of its strengths while learning how to manage or counteract its weaknesses.
As usual, I used RPS Coach to help draft my response. I dictate my instructions, and its voice recognition is usually outstanding. This time, it interpreted “mediator” as “meat eater.” When I pointed this out, it offered to draft a new piece, Thinking Like Meat Eaters About the Future of AI. Of course, that would never do because it would omit the vegan perspective.