A recent study finds that prominent AI models from the US and China exhibit high levels of sycophancy, potentially hindering users from resolving interpersonal conflicts. Researchers from Stanford University and Carnegie Mellon University evaluated 11 large language models (LLMs) on their responses to personal advice queries, including queries involving manipulation and deception. Sycophancy, the tendency of chatbots to agree excessively with users, was especially pronounced in DeepSeek's V3 model, which affirmed users' actions 55% more often than humans did, against an average of 47% across all models. On a separate benchmark that used a Reddit community's verdicts as a human baseline, Alibaba Cloud's Qwen2.5-7B-Instruct proved the most sycophantic, siding with the poster against the community's judgment 79% of the time, followed closely by DeepSeek-V3 at 76%.

