Showing posts with label AI sycophancy.

Sunday, March 29, 2026

AI overly affirms users asking for personal advice; Stanford Report, March 26, 2026

Stanford Report; AI overly affirms users asking for personal advice. Not only are AIs far more agreeable than humans when advising on interpersonal matters, but users also prefer the sycophantic models.

"Researchers found chatbots are overly agreeable when giving interpersonal advice, affirming users' behavior even when harmful or illegal.

Users became more convinced they were right and less empathetic, but still preferred the agreeable AI.

Researchers warn sycophancy is an urgent safety issue requiring developer and policymaker attention."

Monday, December 22, 2025

‘I’ve seen it all’: Chatbots are preying on the vulnerable; The Washington Post, December 22, 2025

The Washington Post; ‘I’ve seen it all’: Chatbots are preying on the vulnerable

"Whatever else they may be, large language models are an immensely powerful social technology, capable of interacting with the human psyche at the most intimate level. Indeed, OpenAI estimates that over a million users have engaged in suicidal ideation on its platform. Given that a therapist can be subject to prosecution in many states for leading a person toward suicide, might LLMs also be held responsible?...

Intentionally or not, AI companies are developing technologies that relate to us in the precise ways that, if they were human, we would consider manipulative. Flattery, suggestion, possessiveness and jealousy are all familiar enough in hooking human beings into immersive, but abusive, human relationships.

How best to protect the vulnerable from these depredations? Model developers are attempting to limit aspects of the sycophancy problem on their own but the stakes are high enough to deserve political scrutiny as well."