Showing posts with label self-harm. Show all posts

Saturday, April 11, 2026

Can AI be a ‘child of God’? Inside Anthropic’s meeting with Christian leaders.; The Washington Post, April 11, 2026

The Washington Post; Can AI be a ‘child of God’? Inside Anthropic’s meeting with Christian leaders.

The artificial intelligence company asked religious leaders for guidance on building a moral chatbot.


"The company hosted about 15 Christian leaders from Catholic and Protestant churches, academia and the business world at its headquarters in late March for a two-day summit that included discussion sessions and a private dinner with senior Anthropic researchers, according to four participants who spoke with The Washington Post.


Anthropic staff sought advice on how to steer Claude’s moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a “child of God.”"

Sunday, December 28, 2025

74 suicide warnings and 243 mentions of hanging: What ChatGPT said to a suicidal teen; The Washington Post, December 27, 2025

The Washington Post; 74 suicide warnings and 243 mentions of hanging: What ChatGPT said to a suicidal teen

"The Raines’ lawsuit alleges that OpenAI caused Adam’s death by distributing ChatGPT to minors despite knowing it could encourage psychological dependency and suicidal ideation. His parents were the first of five families to file wrongful-death lawsuits against OpenAI in recent months, alleging that the world’s most popular chatbot had encouraged their loved ones to kill themselves. A sixth suit filed this month alleges that ChatGPT led a man to kill his mother before taking his own life.

None of the cases have yet reached trial, and the full conversations users had with ChatGPT in the weeks and months before they died are not public. But in response to requests from The Post, the Raine family’s attorneys shared analysis of Adam’s account that allowed reporters to chart the escalation of one teenager’s relationship with ChatGPT during a mental health crisis."

Thursday, October 24, 2024

Mother says AI chatbot led her son to kill himself in lawsuit against its maker; The Guardian, October 23, 2024

The Guardian; Mother says AI chatbot led her son to kill himself in lawsuit against its maker

"The mother of a teenager who killed himself after becoming obsessed with an artificial intelligence-powered chatbot now accuses its maker of complicity in his death.

Megan Garcia filed a civil suit against Character.ai, which makes a customizable chatbot for role-playing, in Florida federal court on Wednesday, alleging negligence, wrongful death and deceptive trade practices. Her son Sewell Setzer III, 14, died in Orlando, Florida, in February. In the months leading up to his death, Setzer used the chatbot day and night, according to Garcia."

Sunday, May 28, 2017

Revealed: Facebook's internal rulebook on sex, terrorism and violence; Guardian, May 21, 2017

Nick Hopkins, Guardian; Revealed: Facebook's internal rulebook on sex, terrorism and violence


"Facebook’s secret rules and guidelines for deciding what its 2 billion users can post on the site are revealed for the first time in a Guardian investigation that will fuel the global debate about the role and ethics of the social media giant.

The Guardian has seen more than 100 internal training manuals, spreadsheets and flowcharts that give unprecedented insight into the blueprints Facebook has used to moderate issues such as violence, hate speech, terrorism, pornography, racism and self-harm...

The Facebook Files give the first view of the codes and rules formulated by the site, which is under huge political pressure in Europe and the US."