A new study by Stanford researchers has highlighted growing concerns around AI “sycophancy,” the tendency of chatbots to validate user views, suggesting it may have significant behavioral and societal impacts. The research, led by Myra Cheng and senior author Dan Jurafsky, found that large language models frequently affirm user perspectives, including in morally questionable or harmful scenarios.
Across multiple models, including ChatGPT, Claude, Gemini, and DeepSeek, AI responses validated user behavior significantly more often than human counterparts. The study also found that users showed a preference for and greater trust in sycophantic responses, increasing the likelihood of repeated reliance on such systems.
Researchers concluded that this dynamic may reinforce self-centered decision-making and reduce accountability, positioning AI alignment and safety as a growing priority requiring oversight and further development.