Showing posts with label Steven Adler. Show all posts

Saturday, November 1, 2025

On Chatbot Psychosis and What Might Be Done to Address It; Santa Clara Markkula Center for Applied Ethics, October 31, 2025

Irina Raicu, Santa Clara Markkula Center for Applied Ethics; On Chatbot Psychosis and What Might Be Done to Address It

"Chatbot psychosis and various responses to it (technical, regulatory, etc.) confront us with a whole range of ethical issues. Register now and join us (online) on November 7 as we aim to unpack at least some of them in a conversation with Steven Adler."

Tuesday, October 28, 2025

Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users; Santa Clara University, Friday, November 7, 2025 12 Noon PST, 3 PM EST

Santa Clara University; Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users

"A number of recent articles, in The New York Times and elsewhere, have described the experience of “chatbot psychosis” that some people develop as they interact with services like ChatGPT. What do we know about chatbot psychosis? Is there a trend of such psychosis at scale? What do you learn if you sift through over one million words comprising one such experience? And what are some practical steps that companies can take to protect their users and reduce the risk of such episodes?

A computer scientist with a background in economics, Steven Adler started to focus on AI risk topics (and AI broadly) a little over a decade ago, and worked at OpenAI from late 2020 through 2024, leading various safety-related research projects and products there. He now writes about what’s happening in AI safety, and argues that safety and technological progress can very much complement each other, and in fact require each other, if the goal is to unlock the uses of AI that people want."

Tuesday, January 28, 2025

Former OpenAI safety researcher brands pace of AI development ‘terrifying’; The Guardian, January 28, 2025

Global technology editor, The Guardian; Former OpenAI safety researcher brands pace of AI development ‘terrifying’

"A former safety researcher at OpenAI says he is “pretty terrified” about the pace of development in artificial intelligence, warning the industry is taking a “very risky gamble” on the technology.

Steven Adler expressed concerns about companies seeking to rapidly develop artificial general intelligence (AGI), a theoretical term referring to systems that match or exceed humans at any intellectual task."