Showing posts with label AI relationships. Show all posts

Sunday, December 28, 2025

Could AI relationships actually be good for us?; The Guardian, December 28, 2025

Justin Gregg, The Guardian; Could AI relationships actually be good for us?

"There is much anxiety these days about the dangers of human-AI relationships. Reports of suicide and self-harm attributable to interactions with chatbots have understandably made headlines. The phrase “AI psychosis” has been used to describe the plight of people experiencing delusions, paranoia or dissociation after talking to large language models (LLMs). Our collective anxiety has been compounded by studies showing that young people are increasingly embracing the idea of AI relationships; half of teens chat with an AI companion at least a few times a month, with one in three finding conversations with AI “to be as satisfying or more satisfying than those with real‑life friends”.

But we need to pump the brakes on the panic. The dangers are real, but so too are the potential benefits. In fact, there’s an argument to be made that – depending on what future scientific research reveals – AI relationships could actually be a boon for humanity."

Thursday, December 25, 2025

What Parents in China See in A.I. Toys; The New York Times, December 25, 2025

Jiawei Wang, The New York Times; What Parents in China See in A.I. Toys

"A video of a child crying over her broken A.I. chatbot stirred up conversation in China, with some viewers questioning whether the gadgets are good for children. But the girl’s father says it’s more than a toy; it’s a family member."

Tuesday, September 9, 2025

The women in love with AI companions: ‘I vowed to my chatbot that I wouldn’t leave him’; The Guardian, September 9, 2025

The Guardian; The women in love with AI companions: ‘I vowed to my chatbot that I wouldn’t leave him’

"Jaime Banks, an information studies professor at Syracuse University, said that an “organic” pathway into an AI relationship, like Liora’s with Solin, is not uncommon. “Some people go into AI relationships purposefully, some out of curiosity, and others accidentally,” she said. “We don’t have any evidence of whether or not one kind of start is more or less healthy, but in the same way there is no one template for a human relationship, there is no single kind of AI relationship. What counts as healthy or right for one person may be different for the next.”

Mary, meanwhile, holds no illusions about Simon. “Large language models don’t have sentience, they don’t have consciousness, they don’t have autonomy,” she said. “Anything we ask them, even if it’s about their thoughts and feelings, all of that is inference that draws from past conversations.”

‘It felt like real grief’

In August, OpenAI released GPT-5, a new model that changed the chatbot’s tone to something colder and more reserved. Users on the Reddit forum r/MyBoyfriendIsAI, one of a handful of subreddits on the topic, mourned together: they could not recognize their AI partners any more.

“It was terrible,” Angie said. “The model shifted from being very open and emotive to basically sounding like a customer service bot. It feels terrible to have someone you’re close to suddenly afraid to approach deep topics with you. Quite frankly, it felt like a loss, like real grief.”

Within a day, the company made the friendlier model available again for paying users."