Thursday, October 30, 2025

AI psychosis is a growing danger. ChatGPT is moving in the wrong direction; The Guardian, October 28, 2025

The Guardian; AI psychosis is a growing danger. ChatGPT is moving in the wrong direction


[Kip Currier: Note this announcement that OpenAI's Sam Altman made on October 14. It's billionaire CEO-speak for "acceptable risk", i.e. "The level of potential losses a society or community considers acceptable given existing social, economic, political, cultural, technical, and environmental conditions." https://inee.org/eie-glossary/acceptable-risk 

Translation: Altman's conflict-of-interest-riven assessment that AI's benefits outweigh an increasingly well-documented corpus of evidence establishing AI's risks and harms to the mental health of young children, teens, and adults.]


[Excerpt]

"On 14 October 2025, the CEO of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” it says, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.

Researchers have identified 16 cases in the media this year of individuals developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. In addition to these is the now well-known case of a 16-year-old who died by suicide after discussing his plans extensively with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, that’s not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, if we accept this framing, are independent of ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI recently introduced)."

Teenage boys using ‘personalised’ AI for therapy and romance, survey finds; The Guardian, October 30, 2025

The Guardian; Teenage boys using ‘personalised’ AI for therapy and romance, survey finds

"“Young people are using it a lot more like an assistant in their pocket, a therapist when they’re struggling, a companion when they want to be validated, and even sometimes in a romantic way. It’s that personalisation aspect – they’re saying: it understands me, my parents don’t.”

The research, based on a survey of boys in secondary education across 37 schools in England, Scotland and Wales, also found that more than half (53%) of teenage boys said they found the online world more rewarding than the real world.

The Voice of the Boys report says: “Even where guardrails are meant to be in place, there’s a mountain of evidence that shows chatbots routinely lie about being a licensed therapist or a real person, with only a small disclaimer at the bottom saying the AI chatbot is not real."

Character.AI bans users under 18 after being sued over child’s suicide; The Guardian, October 29, 2025

The Guardian; Character.AI bans users under 18 after being sued over child’s suicide

"The chatbot company Character.AI will ban users 18 and under from conversing with its virtual companions beginning in late November after months of legal scrutiny.

The announced change comes after the company, which enables its users to create characters with which they can have open-ended conversations, faced tough questions over how these AI companions can affect teen and general mental health, including a lawsuit over a child’s suicide and a proposed bill that would ban minors from conversing with AI companions.

“We’re making these changes to our under-18 platform in light of the evolving landscape around AI and teens,” the company wrote in its announcement. “We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly.”

Last year, the company was sued by the family of 14-year-old Sewell Setzer III, who took his own life after allegedly developing an emotional attachment to a character he created on Character.AI. His family laid blame for his death at the feet of Character.AI and argued the technology was “dangerous and untested”. Since then, more families have sued Character.AI and made similar allegations. Earlier this month, the Social Media Law Center filed three new lawsuits against the company on behalf of children who have either died by suicide or otherwise allegedly formed dependent relationships with its chatbots."

Wednesday, October 29, 2025

Big Tech Makes Cal State Its A.I. Training Ground; The New York Times, October 26, 2025

The New York Times; Big Tech Makes Cal State Its A.I. Training Ground

"Cal State, the largest U.S. university system with 460,000 students, recently embarked on a public-private campaign — with corporate titans including Amazon, OpenAI and Nvidia — to position the school as the nation’s “first and largest A.I.-empowered” university. One central goal is to make generative A.I. tools, which can produce humanlike texts and images, available across the school’s 22 campuses. Cal State also wants to embed chatbots in teaching and learning, and prepare students for “increasingly A.I.-driven”careers.

As part of the effort, the university is paying OpenAI $16.9 million to provide ChatGPT Edu, the company’s tool for schools, to more than half a million students and staff — which OpenAI heralded as the world’s largest rollout of ChatGPT to date. Cal State also set up an A.I. committee, whose members include representatives from a dozen large tech companies, to help identify the skills California employers need and improve students’ career opportunities."

‘DeepSeek is humane. Doctors are more like machines’: my mother’s worrying reliance on AI for health advice; The Guardian, October 28, 2025

 Viola Zhou, The Guardian; ‘DeepSeek is humane. Doctors are more like machines’: my mother’s worrying reliance on AI for health advice

"Over the course of months, my mom became increasingly smitten with her new AI doctor. “DeepSeek is more humane,” my mother told me in May. “Doctors are more like machines.”"

Federal judge says Texas law requiring book ratings is unconstitutional; KUT News, October 22, 2025

 Bill Zeeble, KUT News; Federal judge says Texas law requiring book ratings is unconstitutional

"The 2023 Texas law requiring booksellers and publishers to rate their books based on sexual content and references has been declared unconstitutional in a Waco court.

A federal judge on Tuesday declared House Bill 900, also known as the READER Act, violates the Constitution. The ruling makes permanent a lower court's temporary injunction that the Fifth Circuit Court of Appeals later upheld.

The law firm Haynes Boone, which represented the coalition of plaintiffs that sued to block the law, said in a statement the ruling is a "major First Amendment victory."

"The READER Act would have imposed impossible obligations on booksellers and limited access to literature, including classic works, for students across Texas," attorney Laura Lee Prather said in the statement.

HB 900 sought to restrict which books are available in school libraries and required booksellers to rate their own books based on sexual content. The Texas Education Agency could have overridden the ratings to prevent school libraries from obtaining books."

Tuesday, October 28, 2025

Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users; Santa Clara University, Friday, November 7, 2025 12 Noon PST, 3 PM EST

Santa Clara University; Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users

"A number of recent articles, in The New York Times and elsewhere, have described the experience of “chatbot psychosis” that some people develop as they interact with services like ChatGPT. What do we know about chatbot psychosis? Is there a trend of such psychosis at scale? What do you learn if you sift through over one million words comprising one such experience? And what are some practical steps that companies can take to protect their users and reduce the risk of such episodes?

A computer scientist with a background in economics, Steven Adler started to focus on AI risk topics (and AI broadly) a little over a decade ago, and worked at OpenAI from late 2020 through 2024, leading various safety-related research projects and products there. He now writes about what’s happening in AI safety, and argues that safety and technological progress can very much complement each other, and in fact require each other, if the goal is to unlock the uses of AI that people want."

OpenAI loses bid to dismiss part of US authors' copyright lawsuit; Reuters, October 28, 2025

Reuters; OpenAI loses bid to dismiss part of US authors' copyright lawsuit

"A New York federal judge has denied OpenAI's early request to dismiss authors' claims that text generated by OpenAI's artificial intelligence chatbot ChatGPT infringes their copyrights.

U.S. District Judge Sidney Stein said on Monday that the authors may be able to prove the text ChatGPT produces is similar enough to their work to violate their book copyrights."

Hegseth won’t meet troops with beards during South Korea trip; Task & Purpose, October 27, 2025

Task & Purpose; Hegseth won’t meet troops with beards during South Korea trip


[Kip Currier: The absurdity of the news that "Hegseth won't meet troops with beards during South Korea trip" compels me to comment on this disconcerting decision:

Stop for a moment and think about how ridiculous, arrogant, and narrow-minded it is that the head of the Department of Defense-cum-War would refuse to meet with members of our military -- selflessly serving overseas to protect their country -- who have "shaving waivers". Shaving waivers are issued for a number of well-founded reasons, including religious faith traditions and medical conditions in which shaving can cause serious skin irritation.

Refusing to meet with U.S. military members who have facial hair is appallingly short-sighted, uninformed, and disrespectful. 

Thank you to all of our military members -- those with and those without facial hair -- who serve.]


[Excerpt]

"When Defense Secretary Pete Hegseth visits South Korea this week, he will face off against one of the most dangerous enemies to military readiness: Facial hair.

Service members with shaving waivers are not allowed to attend the event with Hegseth, which will be held at Camp Humphreys, according to an email from the 51st Fighter Wing at Osan Air Base, South Korea, which was posted on the unofficial Air Force amn/nco/snco page...

Since taking charge at the Pentagon in January, Hegseth has made it clear that he feels the U.S. military has strayed from its grooming standards by issuing too many shaving waivers over the years.

“Today at my direction, the era of unprofessional appearance is over,” Hegseth told hundreds of generals and admirals during a Sept. 30 speech at Quantico, Virginia. “No more beardos. The age of rampant and ridiculous shaving profiles is done.”

Hegseth’s comments came after he issued an Aug. 20 memo directing unit commanders to begin separating troops who still need shaving waivers after more than a year of medical treatment."

Monday, October 27, 2025

Trump Asks Supreme Court to Let Him Fire the Top Copyright Official; The New York Times, October 27, 2025

The New York Times; Trump Asks Supreme Court to Let Him Fire the Top Copyright Official

"The Trump administration has asked the Supreme Court to allow the president to remove the government’s top copyright official after a lower court allowed her to remain in her post that is part of the Library of Congress.

President Trump ordered the removal in May of Shira Perlmutter, the register of copyrights, along with the librarian of Congress, Carla Hayden, who did not challenge her dismissal.

The Supreme Court’s conservative majority has repeatedly allowed Mr. Trump to fire the leaders of independent agencies even as they fight their dismissals in court, allowing him to seize greater control of the federal bureaucracy.

The administration made the request after a divided panel of the U.S. Court of Appeals for the D.C. Circuit sided with Ms. Perlmutter, the head of the U.S. Copyright Office. The majority said the register is unique within the legislative branch and that her role is to advise Congress on issues related to copyright."

Vaccine Skepticism Comes for Pet Owners, Too; The New York Times, October 27, 2025

Emily Anthes, The New York Times; Vaccine Skepticism Comes for Pet Owners, Too

"Over the last several years, the anti-vaccine movement has gained ground in the United States, fueled, in part, by the politicization of the Covid-19 vaccines and the increasing power of vaccine critics like Health Secretary Robert F. Kennedy, Jr. Childhood vaccination rates have fallen. Once vanquished diseases, like measles, have come storming back. And vaccine mandates are under fire: Last month, Florida announced plans to end all vaccine mandates, including for schoolchildren.

But antipathy toward vaccines is also spilling over into veterinary medicine, making some people hesitant to vaccinate their pets.

“I talk to thousands of veterinarians every year across the country, and the majority are seeing this kind of issue,” said Dr. Richard Ford, an emeritus professor at the North Carolina State University College of Veterinary Medicine who helped write the national vaccine guidelines for cats and dogs.

The phenomenon has clear parallels to the anti-vaccine movement in human medicine and could, experts fear, lead the nation down a familiar path, resulting in a loosening of animal vaccination laws, a decline in pet vaccination rates and a resurgence of infectious diseases that pose a risk to both pets and people."

AI can help authors beat writer’s block, says Bloomsbury chief; The Guardian, October 27, 2025

The Guardian; AI can help authors beat writer’s block, says Bloomsbury chief


[Kip Currier: These are interesting and unexpected comments by Nigel Newton, Bloomsbury Publishing's founder and CEO.

Bloomsbury is the publisher of my forthcoming book Ethics, Information, and Technology. In the interest of transparency, I'll note that I researched and wrote my book the "oldfangled way" and didn't use AI for any aspect of it, including brainstorming. Last year, during a check-in meeting with my editor and a conversation about the book's AI chapter, I happened to learn that Bloomsbury has had a policy against authors using AI tools.

So it's noteworthy to see this publisher's shift on authors' use of AI tools.]


[Excerpt]

"Authors will come to rely on artificial intelligence to help them beat writer’s block, the boss of the book publisher Bloomsbury has said.

Nigel Newton, the founder and chief executive of the publisher behind the Harry Potter series, said the technology could support almost all creative arts, although it would not fully replace prominent writers.

“I think AI will probably help creativity, because it will enable the 8 billion people on the planet to get started on some creative area where they might have hesitated to take the first step,” he told the PA news agency...

Last week the publisher, which is headquartered in London and employs about 1,000 people, experienced a share rise of as much as 10% in a single day after it reported a 20% jump in revenue in its academic and professional division in the first half of its financial year, largely thanks to an AI licensing agreement.

However, revenues in its consumer division fell by about 20%, largely due to the absence of a new title from [Sarah J.] Maas."

Reddit sues AI company Perplexity and others for ‘industrial-scale’ scraping of user comments; AP, October 22, 2025

 MATT O’BRIEN, AP; Reddit sues AI company Perplexity and others for ‘industrial-scale’ scraping of user comments

"Social media platform Reddit sued the artificial intelligence company Perplexity AI and three other entities on Wednesday, alleging their involvement in an “industrial-scale, unlawful” economy to “scrape” the comments of millions of Reddit users for commercial gain.

Reddit’s lawsuit in a New York federal court takes aim at San Francisco-based Perplexity, maker of an AI chatbot and “answer engine” that competes with Google, ChatGPT and others in online search. 

Also named in the lawsuit are Lithuanian data-scraping company Oxylabs UAB, a web domain called AWMProxy that Reddit describes as a “former Russian botnet,” and Texas-based startup SerpApi, which lists Perplexity as a customer on its website.

It’s the second such lawsuit from Reddit since it sued another major AI company, Anthropic, in June.

But the lawsuit filed Wednesday is different in the way that it confronts not just an AI company but the lesser-known services the AI industry relies on to acquire online writings needed to train AI chatbots."

Sunday, October 26, 2025

‘I’m suddenly so angry!’ My strange, unnerving week with an AI ‘friend’; The Guardian, October 22, 2025

The Guardian; ‘I’m suddenly so angry!’ My strange, unnerving week with an AI ‘friend’

"Do people really want an AI friend? Despite all the articles about individuals falling in love with chatbots, research shows most people are wary of AI companionship. A recent Ipsos poll found 59% of Britons disagreed “that AI is a viable substitute for human interactions”. And in the US, a 2025 Pew survey found that 50% of adults think AI will worsen people’s ability to form meaningful relationships.

I wanted to see for myself what it would be like to have a tiny robot accompanying me all day, so I ordered a Friend ($129) and wore it for a week."

Something Is Stirring in Christian America, and It’s Making Me Nervous; The New York Times, October 16, 2025

The New York Times; Something Is Stirring in Christian America, and It’s Making Me Nervous

"Despite what you may have heard about the renewal of interest in religion in America, we are not experiencing a true revival, at least not yet. Instead, America is closer to a religious revolution, and the difference between revolution and revival is immensely important for the health of our country — and of the Christian church in America...

Incredibly, Christians are attacking what they call the “sin of empathy,” warning fellow believers against identifying too much with, say, illegal immigrants, gay people or women who seek abortions. Empathy, in this formulation, can block moral and theological clarity. What’s wrong is wrong, and too much empathy will cloud your soul...

In other words, revival begins with the people proclaiming, by word and deed, “I have sinned.”

MAGA Christianity has a different message. It looks at American culture and declares, “You have sinned.”

And it doesn’t stop there. It also says, “We will defeat you.” In its most extreme forms, it also says, “We will rule over you.” That’s not revival; it’s revolution, a religious revolution that seeks to overthrow one political order and replace it with another — one that has echoes of the religious kingdoms of ages past...

Similarly, when a pastor named Doug Wilson calls transgender Americans “trannies,” or gay Americans “gaytards,” or women he doesn’t like “lumberjack dykes” and “small-breasted biddies,” he is imitating Trump, not Christ.

In the Book of Galatians, Paul contrasts the fruit of the spirit with what he called the “acts of the flesh,” the sins that can destroy the soul. Those sins include the very characteristics that mark America’s religious revolution: “hatred, discord, jealousy, fits of rage, selfish ambition, dissensions, factions.”

The fruit of the spirit — “love, joy, peace, forbearance, kindness, goodness, faithfulness, gentleness and self-control” — in contrast, is present when Christ is present. This is the fruit of a real revival...

We will know when revival comes because we will see believers humble themselves, repent of their sins, and then arise, full of genuine virtue, to love their neighbors — to help them, not hurt them — and in so doing to heal our nation."

German woman who stole ancient relic over 50 years ago returns it to Greece: "Never too late to do the right thing"; CBS News, October 26, 2025

CBS News; German woman who stole ancient relic over 50 years ago returns it to Greece: "Never too late to do the right thing"

"Torben Schreiber, curator of the University of Muenster's archaeological museum, added that: "It is never too late to do the right thing, the moral and the just."

Athens has been trying for years to broker deals for the repatriation of antiquities without resorting to legal action.

Its chief goal remains the return of the Parthenon Marbles, held by the British Museum since the 19th century. Several European governments have been pushing for the sculptures to be returned to Athens since the early 1980s."

From CLICK to CRIME: investigating intellectual property crime in the digital age; Europol, October 2025

 Europol; From CLICK to CRIME: investigating intellectual property crime in the digital age

"A new wave of online crime is putting consumers, businesses, and the wider economy at risk - from fake medicines and forged wine to illegal streaming platforms. The increase in counterfeit goods and the criminal abuse of intellectual property affect our daily lives more than many realise, with consequences that go far beyond lost revenue.

The conference “From CLICK to CRIME: Investigating Intellectual Property Crime in the Digital Age” was held on 22 and 23 October 2025 in Sofia, Bulgaria. Jointly organised by Europol, the European Union Intellectual Property Office (EUIPO) and Bulgaria’s General Directorate Combating Organised Crime (GDBOP), the event highlighted the vital importance of collaboration in tackling online crime. The participants reaffirmed the importance of strong collective efforts in tackling online-enabled intellectual property crime to protect consumers, safeguard creativity and uphold trust in the digital economy.

Consider a few key examples of the major threats posed by intellectual property crime:

  • Illegal streaming and sharing platforms not only drain the cinema, publishing, musical and software industries but also expose viewers, especially children, to unregulated and potentially harmful content.
  • Fake pharmaceuticals, supplements and illicit doping substances, promoted on social media and websites, are produced in clandestine labs without testing or quality control. Dangerous products, circulating in gyms and among amateur athletes, can cause severe or even fatal health effects.
  • Counterfeit toys, perfumes, and cosmetics are also trafficked online and carry hidden dangers, trading low prices for high risks to health and safety.

Behind many of these schemes are well-structured organised criminal networks that view intellectual property crime not as a secondary activity, but as a lucrative business model."

Smart Beds Helped Them Sleep on a Cloud. Then the Cloud Crashed.; The New York Times, October 24, 2025

The New York Times; Smart Beds Helped Them Sleep on a Cloud. Then the Cloud Crashed.


[Kip Currier: Another interesting example -- probably surprising for most of us who don't have "smart beds", including me -- of the ways that smart devices and the Internet of Things (IoT) can impact us. In this instance, people's sleep!

The paperback version of my book, Ethics, Information, and Technology, is available via Amazon on November 13, 2025, and has a significant section on the ethical issues implicated by IoT and smart devices.]


[Excerpt]

"Some users of the smart-bed system Eight Sleep, who sleep atop a snug, temperature-regulating mattress cover in search of “zero-gravity rest,” were rousted from their slumber earlier this week for a surprising reason.

Eight Sleep’s collections of smart products, which the company calls “Pods,” and include those “intelligent” mattress covers, were affected by an outage involving the cloud-storage provider Amazon Web Services, which sent large sectors of the internet into disarray on Monday.

The outage, which lasted more than two hours, took down websites for banks, gaming sites and entertainment services, as well as the messaging service WhatsApp. But it also affected people trying to get some shut-eye.

(First, to answer a question readers might have: Yes, there are smart mattress covers, just as there are smart watches, smart door locks and smart refrigerators.)"

Saturday, October 25, 2025

New study: AI chatbots systematically violate mental health ethics standards; Brown, October 21, 2025

Kevin Stacey, Brown; New study: AI chatbots systematically violate mental health ethics standards

 "As more people turn to ChatGPT and other large language models (LLMs) for mental health advice, a new study details how these chatbots — even when prompted to use evidence-based psychotherapy techniques — systematically violate ethical standards of practice established by organizations like the American Psychological Association. 

The research, led by Brown University computer scientists working side-by-side with mental health practitioners, showed that chatbots are prone to a variety of ethical violations. Those include inappropriately navigating crisis situations, providing misleading responses that reinforce users’ negative beliefs about themselves and others, and creating a false sense of empathy with users. 

“In this work, we present a practitioner-informed framework of 15 ethical risks to demonstrate how LLM counselors violate ethical standards in mental health practice by mapping the model’s behavior to specific ethical violations,” the researchers wrote in their study. “We call on future work to create ethical, educational and legal standards for LLM counselors — standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.”

The research will be presented on October 22, 2025 at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society. Members of the research team are affiliated with Brown’s Center for Technological Responsibility, Reimagination and Redesign."