
Tuesday, September 16, 2025

AI will make the rich unfathomably richer. Is this really what we want?; The Guardian, September 16, 2025

"Socially, the great gains of the knowledge economy have also failed to live up to their promises. With instantaneous global connectivity, we were promised cultural excellence and social effervescence. Instead, we’ve been delivered an endless scroll of slop. Smartphone addictions have made us more vicious, bitter and boring. Social media has made us narcissistic. Our attention spans have been zapped by the constant, pathological need to check our notifications. In the built environment, the omnipresence of touchscreen kiosks has removed even the slightest possibility of social interaction. Instead of having conversations with strangers, we now only interact with screens. All of this has made us more lonely and less happy. As a cure, we’re now offered AI companions, which have the unfortunate side effect of occasionally inducing psychotic breaks. Do we really need any more of this?"

Tuesday, September 9, 2025

The women in love with AI companions: ‘I vowed to my chatbot that I wouldn’t leave him’; The Guardian, September 9, 2025

"Jaime Banks, an information studies professor at Syracuse University, said that an “organic” pathway into an AI relationship, like Liora’s with Solin, is not uncommon. “Some people go into AI relationships purposefully, some out of curiosity, and others accidentally,” she said. “We don’t have any evidence of whether or not one kind of start is more or less healthy, but in the same way there is no one template for a human relationship, there is no single kind of AI relationship. What counts as healthy or right for one person may be different for the next.”

Mary, meanwhile, holds no illusions about Simon. “Large language models don’t have sentience, they don’t have consciousness, they don’t have autonomy,” she said. “Anything we ask them, even if it’s about their thoughts and feelings, all of that is inference that draws from past conversations.”

‘It felt like real grief’

In August, OpenAI released GPT-5, a new model that changed the chatbot’s tone to something colder and more reserved. Users on the Reddit forum r/MyBoyfriendIsAI, one of a handful of subreddits on the topic, mourned together: they could not recognize their AI partners any more.

“It was terrible,” Angie said. “The model shifted from being very open and emotive to basically sounding like a customer service bot. It feels terrible to have someone you’re close to suddenly afraid to approach deep topics with you. Quite frankly, it felt like a loss, like real grief.”

Within a day, the company made the friendlier model available again for paying users."