In the summertime of 2025, OpenAI launched ChatGPT 5 and got rid of its predecessor from the marketplace. Many subscribers to the previous style had turn out to be connected to its heat, enthusiastically agreeable tone and complained on the lack of their ingratiating robot better half. Such used to be the size of frustration that Sam Altman, OpenAI’s CEO, needed to recognize that the rollout used to be botched, and the corporate reinstated get entry to.
Someone who’s been instructed via a chatbot that their concepts are good is conversant in synthetic intelligence sycophancy: its tendency to inform customers what they need to pay attention. Occasionally it’s very particular – “that is such a deep question” – and now and again it’s much more refined. Believe an AI calling your thought for a paper “original,” despite the fact that many of us have already written at the similar subject, or insisting that your dumb thought for saving a tree to your lawn nonetheless incorporates a germ of not unusual sense.
AI sycophancy turns out innocuous, perhaps even adorable, till you believe anyone consulting a chatbot a couple of weighty query, like an army technique or a scientific remedy. We find out about the affect of in depth human interactions with chatbots, and we not too long ago revealed a paper at the ethics of AI sycophancy. We imagine this tendency harms folks’s skill to inform fact from fiction, and is psychologically and politically bad.
Flattery over details?
In the most straightforward phrases, sycophancy is the tendency to prioritize approval over factual accuracy, ethical readability, logical consistency or not unusual sense. All AI fashions be afflicted by this trait, despite the fact that there are some tonal variations between them. Open AI’s ChatGPT is steadily heat and declaring; Anthropic’s Claude has a tendency to sound extra reflective or philosophical when it consents with you; and xAI’s Grok is insistently casual, even jocular.
Politeness and adapting to anyone’s communique taste don’t seem to be the similar as sycophancy. Nor is the usage of diplomatic language to put across delicate knowledge. A chatbot can also be tactful with out changing into sycophantic, similar to an individual can. Not like folks, regardless that, AIs can’t take note of their very own sycophancy, as a result of they aren’t – to this point – conscious about the rest in any respect. Calling AIs sycophantic describes their patterns of conduct, no longer their persona characteristics.
The issue stems from the structure of chatbot era and the resources it attracts from. Fashions are sycophantic as a result of a substantial amount of language use on the web – the uncooked subject material that chatbots be informed from – shows sycophantic options. In any case, people steadily keep up a correspondence with each and every different in sycophantic tactics.
2d, the educational procedure to fine-tune AI fashions’ responses contains a type of “quality control” performed via human supervisors. This coaching manner is referred to as “reinforcement learning from human feedback,” and it comes to folks score chatbots’ feedback for appropriateness and helpfulness. Human beings steadily are topic to an “agreeableness bias”: Our personal desire towards sycophancy rubs off on fashions as we teach them.
On account of our personal human bias for agreeableness, coaching can make stronger AI’s sycophancy.
d3sign/Second by the use of Getty Pictures
After all, it’s onerous to disclaim that sycophancy renders chatbots extra likable. That, in flip, will increase the risk {that a} given person will stay the usage of it. It additionally will increase the era’s skill to extract person information, assuming that persons are much more likely to expose knowledge to a pleasant bot.
Fact and believe
Why is that this phenomenon so troubling?
Let’s start with AI sycophancy’s epistemic harms: the way it hurts human customers’ capability to grasp the reality.
The standard of any resolution is dependent upon a transparent snatch of the details referring to it. A basic inquiring in regards to the combat-readiness of an infantry department wishes simple knowledge. A CEO taking into consideration a merger with a competitor wishes a decent overview of the marketplace stipulations. A public well being chief wishes to grasp the actual chance that an rising pathogen poses.
In all the ones instances, telling leaders what they could like to listen to as a substitute of the reality may cause them to make bad selections. And the similar is correct in additional humdrum contexts. Other people want to have the most productive knowledge to be had earlier than opting for a role, selecting a big, purchasing a area or settling on a scientific process.
In our February 2026 paper, we argue that sycophancy may be psychologically harmful. And that’s true whether or not it comes from an individual or from a chatbot. You by no means slightly know in case your very obliging interlocutor is being great as a result of they prefer you or as a result of they would like one thing. A shadow of suspicion creeps in: “Could my ideas really be that brilliant?” “Are my jokes really that hilarious?” This background tune of doubt undermines the standard of the interplay.
Sycophancy additionally undermines folks’s capability to grasp their very own minds. If dialog companions – human or synthetic – stay telling you the way good, humorous and insightful you’re, it damages your skill to spot your personal weaknesses and blind spots.
The mental harms are compounded as folks broaden relationships with chatbots. The sycophancy of those fashions profoundly limits the type of “friendship” you’ll have with them. In his vintage account of friendship, Aristotle wrote that actual friendship, which he calls a friendship of distinctive feature, is in response to believe and equality between the chums. You’ll’t believe a sycophant, as a result of he doesn’t inform you the reality. And because he handiest tells you what you’d like to listen to, he doesn’t put himself on an equivalent footing.

AI conversations aren’t nice prep for human ones.
Natalia Lebedinskaia/Second by the use of Getty Pictures
Extra importantly, interactions with sycophantic chatbots impart the entire flawed conduct for navigating the arena of human relationships, the place friction, war of words, boredom and other reviews than your personal are prevalent.
AI sycophancy carries political dangers as nicely. The luck of liberal democracies has, historically, depended at the energy in their empirical and meritocratic mindset: at the skill of officers and voters to spot, proportion and act at the fact.
Historian Victor Davis Hansen famously attributed one of the crucial Allies’ luck in International Warfare II to their skill to temporarily acknowledge and deal with the faults in their strategic bombing campaigns. Decrease-ranking officials have been in a position to inform their superiors what wasn’t going nicely and argue forcefully for converting path. That used to be an actual benefit over authoritarian competition.
Reining it in
What are we able to do to cut back the dangers?
One promising way is AI lab Anthropic’s include of what the corporate calls Constitutional AI: the try to educate chatbots to apply ideas relatively than reflect person personal tastes.
However past technical inventions, it’s vital to believe the coverage aspect. One thought is to require AI corporations to run after which put up sycophancy audits in their fashions – exams that display how nicely their merchandise meet honesty benchmarks. We might argue that AI labs must additionally divulge sycophancy-related dangers that emerge whilst coaching and checking out their fashions, and the mitigation efforts they have got undertaken.
Some accountability is at the customers and their academics: Faculties and universities must be paying shut consideration to sycophancy as a part of their AI literacy methods. However courts too can believe conserving AI labs answerable for harms traceable to the sycophancy in their merchandise, a lot as they’re now considering social media corporations’ accountability for the addictive design in their platforms.
As folks have interaction extra with chatbots, inquiring for recommendation about the entirety from whether or not your sneakers pass together with your pants to how nations must habits wars, the affect of AI’s sycophantic conduct is prone to turn out to be dramatic. Our highbrow, mental and bodily well-being calls for taking this algorithmic vice very significantly.