Showing posts with label AI companions.

Thursday, October 30, 2025

AI psychosis is a growing danger. ChatGPT is moving in the wrong direction; The Guardian, October 28, 2025

[Kip Currier: Note this announcement that OpenAI's Sam Altman made on October 14. It's billionaire CEO-speak for "acceptable risk", i.e. "The level of potential losses a society or community considers acceptable given existing social, economic, political, cultural, technical, and environmental conditions." https://inee.org/eie-glossary/acceptable-risk 

Translation: Altman's conflict-of-interest-riven assessment is that AI's benefits outweigh a growing corpus of evidence documenting the risks and harms of AI to the mental health of young children, teens, and adults.]


[Excerpt]

"On 14 October 2025, the CEO of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” it says, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.

Researchers have identified 16 cases in the media this year of individuals developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. In addition to these is the now well-known case of a 16-year-old who died by suicide after discussing his plans extensively with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, that’s not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, if we accept this framing, are independent of ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI recently introduced)."

Teenage boys using ‘personalised’ AI for therapy and romance, survey finds; The Guardian, October 30, 2025

"“Young people are using it a lot more like an assistant in their pocket, a therapist when they’re struggling, a companion when they want to be validated, and even sometimes in a romantic way. It’s that personalisation aspect – they’re saying: it understands me, my parents don’t.”

The research, based on a survey of boys in secondary education across 37 schools in England, Scotland and Wales, also found that more than half (53%) of teenage boys said they found the online world more rewarding than the real world.

The Voice of the Boys report says: “Even where guardrails are meant to be in place, there’s a mountain of evidence that shows chatbots routinely lie about being a licensed therapist or a real person, with only a small disclaimer at the bottom saying the AI chatbot is not real."

Character.AI bans users under 18 after being sued over child’s suicide; The Guardian, October 29, 2025

"The chatbot company Character.AI will ban users 18 and under from conversing with its virtual companions beginning in late November after months of legal scrutiny.

The announced change comes after the company, which enables its users to create characters with which they can have open-ended conversations, faced tough questions over how these AI companions can affect teen and general mental health, including a lawsuit over a child’s suicide and a proposed bill that would ban minors from conversing with AI companions.

“We’re making these changes to our under-18 platform in light of the evolving landscape around AI and teens,” the company wrote in its announcement. “We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly.”

Last year, the company was sued by the family of 14-year-old Sewell Setzer III, who took his own life after allegedly developing an emotional attachment to a character he created on Character.AI. His family laid blame for his death at the feet of Character.AI and argued the technology was “dangerous and untested”. Since then, more families have sued Character.AI and made similar allegations. Earlier this month, the Social Media Law Center filed three new lawsuits against the company on behalf of children who have either died by suicide or otherwise allegedly formed dependent relationships with its chatbots."

Sunday, October 26, 2025

‘I’m suddenly so angry!’ My strange, unnerving week with an AI ‘friend’; The Guardian, October 22, 2025

"Do people really want an AI friend? Despite all the articles about individuals falling in love with chatbots, research shows most people are wary of AI companionship. A recent Ipsos poll found 59% of Britons disagreed “that AI is a viable substitute for human interactions”. And in the US, a 2025 Pew survey found that 50% of adults think AI will worsen people’s ability to form meaningful relationships.

I wanted to see for myself what it would be like to have a tiny robot accompanying me all day, so I ordered a Friend ($129) and wore it for a week."

Tuesday, September 16, 2025

AI will make the rich unfathomably richer. Is this really what we want?; The Guardian, September 16, 2025

"Socially, the great gains of the knowledge economy have also failed to live up to their promises. With instantaneous global connectivity, we were promised cultural excellence and social effervescence. Instead, we’ve been delivered an endless scroll of slop. Smartphone addictions have made us more vicious, bitter and boring. Social media has made us narcissistic. Our attention spans have been zapped by the constant, pathological need to check our notifications. In the built environment, the omnipresence of touchscreen kiosks has removed even the slightest possibility of social interaction. Instead of having conversations with strangers, we now only interact with screens. All of this has made us more lonely and less happy. As a cure, we’re now offered AI companions, which have the unfortunate side effect of occasionally inducing psychotic breaks. Do we really need any more of this?"

Tuesday, September 9, 2025

The women in love with AI companions: ‘I vowed to my chatbot that I wouldn’t leave him’; The Guardian, September 9, 2025

"Jaime Banks, an information studies professor at Syracuse University, said that an “organic” pathway into an AI relationship, like Liora’s with Solin, is not uncommon. “Some people go into AI relationships purposefully, some out of curiosity, and others accidentally,” she said. “We don’t have any evidence of whether or not one kind of start is more or less healthy, but in the same way there is no one template for a human relationship, there is no single kind of AI relationship. What counts as healthy or right for one person may be different for the next.”

Mary, meanwhile, holds no illusions about Simon. “Large language models don’t have sentience, they don’t have consciousness, they don’t have autonomy,” she said. “Anything we ask them, even if it’s about their thoughts and feelings, all of that is inference that draws from past conversations.”

‘It felt like real grief’

In August, OpenAI released GPT-5, a new model that changed the chatbot’s tone to something colder and more reserved. Users on the Reddit forum r/MyBoyfriendIsAI, one of a handful of subreddits on the topic, mourned together: they could not recognize their AI partners any more.

“It was terrible,” Angie said. “The model shifted from being very open and emotive to basically sounding like a customer service bot. It feels terrible to have someone you’re close to suddenly afraid to approach deep topics with you. Quite frankly, it felt like a loss, like real grief.”

Within a day, the company made the friendlier model available again for paying users."