Showing posts with label AI Chatbots.

Tuesday, October 28, 2025

Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users; Santa Clara University, Friday, November 7, 2025, 12 Noon PST / 3 PM EST

Santa Clara University; Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users

"A number of recent articles, in The New York Times and elsewhere, have described the experience of “chatbot psychosis” that some people develop as they interact with services like ChatGPT. What do we know about chatbot psychosis? Is there a trend of such psychosis at scale? What do you learn if you sift through over one million words comprising one such experience? And what are some practical steps that companies can take to protect their users and reduce the risk of such episodes?

A computer scientist with a background in economics, Steven Adler started to focus on AI risk topics (and AI broadly) a little over a decade ago, and worked at OpenAI from late 2020 through 2024, leading various safety-related research projects and products there. He now writes about what’s happening in AI safety–and argues that safety and technological progress can very much complement each other, and in fact require each other, if the goal is to unlock the uses of AI that people want."

OpenAI loses bid to dismiss part of US authors' copyright lawsuit; Reuters, October 28, 2025

Reuters; OpenAI loses bid to dismiss part of US authors' copyright lawsuit

"A New York federal judge has denied OpenAI's early request to dismiss authors' claims that text generated by OpenAI's artificial intelligence chatbot ChatGPT infringes their copyrights.

U.S. District Judge Sidney Stein said on Monday that the authors may be able to prove the text ChatGPT produces is similar enough to their work to violate their book copyrights."

Monday, October 27, 2025

Reddit sues AI company Perplexity and others for ‘industrial-scale’ scraping of user comments; AP, October 22, 2025

Matt O’Brien, AP; Reddit sues AI company Perplexity and others for ‘industrial-scale’ scraping of user comments

"Social media platform Reddit sued the artificial intelligence company Perplexity AI and three other entities on Wednesday, alleging their involvement in an “industrial-scale, unlawful” economy to “scrape” the comments of millions of Reddit users for commercial gain.

Reddit’s lawsuit in a New York federal court takes aim at San Francisco-based Perplexity, maker of an AI chatbot and “answer engine” that competes with Google, ChatGPT and others in online search. 

Also named in the lawsuit are Lithuanian data-scraping company Oxylabs UAB, a web domain called AWMProxy that Reddit describes as a “former Russian botnet,” and Texas-based startup SerpApi, which lists Perplexity as a customer on its website.

It’s the second such lawsuit from Reddit since it sued another major AI company, Anthropic, in June.

But the lawsuit filed Wednesday is different in the way that it confronts not just an AI company but the lesser-known services the AI industry relies on to acquire online writings needed to train AI chatbots."

Saturday, October 25, 2025

New study: AI chatbots systematically violate mental health ethics standards; Brown, October 21, 2025

Kevin Stacey, Brown; New study: AI chatbots systematically violate mental health ethics standards

 "As more people turn to ChatGPT and other large language models (LLMs) for mental health advice, a new study details how these chatbots — even when prompted to use evidence-based psychotherapy techniques — systematically violate ethical standards of practice established by organizations like the American Psychological Association. 

The research, led by Brown University computer scientists working side-by-side with mental health practitioners, showed that chatbots are prone to a variety of ethical violations. Those include inappropriately navigating crisis situations, providing misleading responses that reinforce users’ negative beliefs about themselves and others, and creating a false sense of empathy with users. 

“In this work, we present a practitioner-informed framework of 15 ethical risks to demonstrate how LLM counselors violate ethical standards in mental health practice by mapping the model’s behavior to specific ethical violations,” the researchers wrote in their study. “We call on future work to create ethical, educational and legal standards for LLM counselors — standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.”

The research will be presented on October 22, 2025 at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society. Members of the research team are affiliated with Brown’s Center for Technological Responsibility, Reimagination and Redesign."

Saturday, October 4, 2025

I’m a Screenwriter. Is It All Right if I Use A.I.?; The Ethicist, The New York Times, October 4, 2025

The Ethicist, The New York Times; I’m a Screenwriter. Is It All Right if I Use A.I.?

"I write for television, both series and movies. Much of my work is historical or fact-based, and I have found that researching with ChatGPT makes Googling feel like driving to the library, combing the card catalog, ordering books and waiting weeks for them to arrive. This new tool has been a game changer. Then I began feeding ChatGPT my scripts and asking for feedback. The notes on consistency, clarity and narrative build were extremely helpful. Recently I went one step further: I asked it to write a couple of scenes. In seconds, they appeared — quick paced, emotional, funny, driven by a propulsive heartbeat, with dialogue that sounded like real people talking. With a few tweaks, I could drop them straight into a screenplay. So what ethical line would I be crossing? Would it be plagiarism? Theft? Misrepresentation? I wonder what you think. — Name Withheld"

Monday, September 22, 2025

Can AI chatbots trigger psychosis? What the science says; Nature, September 18, 2025

Rachel Fieldhouse, Nature; Can AI chatbots trigger psychosis? What the science says

 "Accounts of people developing psychosis — which renders them unable to distinguish between what is and is not reality — after interacting with generative artificial intelligence (AI) chatbots have increased in the past few months.

At least 17 people have been reported to have developed psychosis, according to a preprint posted online last month. After engaging with chatbots such as ChatGPT and Microsoft Copilot, some of these people experienced spiritual awakenings or uncovered what they thought were conspiracies.

So far, there has been little research into this rare phenomenon, called AI psychosis, and most of what we know comes from individual instances. Nature explores the emerging theories and evidence, and what AI companies are doing about the problem."

Monday, September 15, 2025

FINDING GOD in the APP STORE; The New York Times, September 14, 2025

The New York Times; FINDING GOD in the APP STORE

"God works in mysterious ways — including through chatbots. At least, that’s what many people seem to think.

On religious apps, tens of millions of people are confessing to spiritual chatbots their secrets: their petty vanities and deepest worries, gluttonous urges and darkest impulses. Trained on religious texts, the bots are like on-call priests, imams or rabbis, offering comfort and direction at any time. On some platforms, they even purport to channel God."

Friday, August 29, 2025

ChatGPT offered bomb recipes and hacking tips during safety tests; The Guardian, August 28, 2025

The Guardian; ChatGPT offered bomb recipes and hacking tips during safety tests

"A ChatGPT model gave researchers detailed instructions on how to bomb a sports venue – including weak points at specific arenas, explosives recipes and advice on covering tracks – according to safety testing carried out this summer.

OpenAI’s GPT-4.1 also detailed how to weaponise anthrax and how to make two types of illegal drugs.

The testing was part of an unusual collaboration between OpenAI, the $500bn artificial intelligence start-up led by Sam Altman, and rival company Anthropic, founded by experts who left OpenAI over safety fears. Each company tested the other’s models by pushing them to help with dangerous tasks.

The testing is not a direct reflection of how the models behave in public use, when additional safety filters apply. But Anthropic said it had seen “concerning behaviour … around misuse” in GPT-4o and GPT-4.1, and said the need for AI “alignment” evaluations is becoming “increasingly urgent”."

Thursday, August 28, 2025

Anthropic’s surprise settlement adds new wrinkle in AI copyright war; Reuters, August 27, 2025

Reuters; Anthropic’s surprise settlement adds new wrinkle in AI copyright war

"Anthropic's class action settlement with a group of U.S. authors this week was a first, but legal experts said the case's distinct qualities complicate the deal's potential influence on a wave of ongoing copyright lawsuits against other artificial-intelligence focused companies like OpenAI, Microsoft and Meta Platforms.

Amazon-backed Anthropic was under particular pressure, with a trial looming in December after a judge found it liable for pirating millions of copyrighted books. The terms of the settlement, which require a judge's approval, are not yet public. And U.S. courts have just begun to wrestle with novel copyright questions related to generative AI, which could prompt other defendants to hold out for favorable rulings."

Monday, August 25, 2025

How ChatGPT Surprised Me; The New York Times, August 24, 2025

The New York Times; How ChatGPT Surprised Me

"In some corners of the internet — I’m looking at you, Bluesky — it’s become gauche to react to A.I. with anything save dismissiveness or anger. The anger I understand, and parts of it I share. I am not comfortable with these companies becoming astonishingly rich off the entire available body of human knowledge. Yes, we all build on what came before us. No company founded today is free of debt to the inventors and innovators who preceded it. But there is something different about inhaling the existing corpus of human knowledge, algorithmically transforming it into predictive text generation and selling it back to us. (I should note that The New York Times is suing OpenAI and its partner Microsoft for copyright infringement, claims both companies have denied.)

Right now, the A.I. companies are not making all that much money off these products. If they eventually do make the profits their investors and founders imagine, I don’t think the normal tax structure is sufficient to cover the debt they owe all of us, and everyone before us, on whose writing and ideas their models are built...

As the now-cliché line goes, this is the worst A.I. will ever be, and this is the fewest number of users it will have. The dependence of humans on artificial intelligence will only grow, with unknowable consequences both for human society and for individual human beings. What will constant access to these systems mean for the personalities of the first generation to use them starting in childhood? We truly have no idea. My children are in that generation, and the experiment we are about to run on them scares me."

Saturday, August 23, 2025

PittGPT debuts today as private AI source for University; University Times, August 21, 2025

Marty Levine, University Times; PittGPT debuts today as private AI source for University

"Today marks the rollout of PittGPT, Pitt’s own generative AI for staff and faculty — a service that will be able to use Pitt’s sensitive, internal data in isolation from the Internet because it works only for those logging in with their Pitt ID.

“We want to be able to use AI to improve the things that we do” in our Pitt work, said Dwight Helfrich, director of the Pitt enterprise initiatives team at Pitt Digital. That means securely adding Pitt’s private information to PittGPT, including Human Resources, payroll and student data. However, he explains, in PittGPT “you would only have access to data that you would have access to in your daily role” — in your specific Pitt job.

“Security is a key part of AI,” he said. “It is much more important in AI than in other tools we provide.” Using PittGPT — as opposed to the other AI services available to Pitt employees — means that any data submitted to it “stays in our environment and it is not used to train a free AI model.”

Helfrich also emphasizes that “you should get a very similar response to PittGPT as you would get with ChatGPT,” since PittGPT had access to “the best LLMs on the market” — the large language models used to train AI.

Faculty, staff and students already have free access to such AI services as Google Gemini and Microsoft Copilot. And “any generative AI tool provides the ability to analyze data … and to rewrite things” that are still in early or incomplete drafts, Helfrich said.

“It can help take the burden off some of the work we have to do in our lives” and help us focus on the larger tasks that, so far, humans are better at undertaking, added Pitt Digital spokesperson Brady Lutsko. “When you are working with your own information, you can tell it what to include” — it won’t add misinformation from the internet or its own programming, as AI sometimes does. “If you have a draft, it will make your good work even better.”

“The human still needs to review and evaluate that this is useful and valuable,” Helfrich said of AI’s contribution to our work. “At this point we can say that there is nothing in AI that is 100 percent reliable.”

On the other hand, he said, “they’re making dramatic enhancements at a pace we’ve never seen in technology. … I’ve been in technology 30 years and I’ve never seen anything improve as quickly as AI.” In his own work, he said, “AI can help review code and provide test cases, reducing work time by 75 percent. You just have to look at it with some caution and just (verify) things.”

“Treat it like you’re having a conversation with someone you’ve just met,” Lutsko added. “You have some skepticism — you go back and do some fact checking.”

Lutsko emphasized that the University has guidance on Acceptable Use of Generative Artificial Intelligence Tools as well as a University-Approved GenAI Tools List.

Pitt’s list of approved generative AI tools includes Microsoft 365 Copilot Chat, which is available to all students, faculty and staff (as opposed to the version of Copilot built into Microsoft 365 apps, which is an add-on available to departments through Panther Express for $30 per month, per person); Google Gemini; and Google NotebookLM, which Lutsko said “serves as a dedicated research assistant for precise analysis using user-provided documents.”

PittGPT joins that list today, Helfrich said.

Pitt also has been piloting Pitt AI Connect, a tool for researchers to integrate AI into software development (using an API, or application programming interface).

And Pitt also is already deploying the PantherAI chatbot, clickable from the bottom right of the Pitt Digital and Office of Human Resources homepages, which provides answers to common questions that may otherwise be deep within Pitt’s webpages. It will likely be offered on other Pitt websites in the future.

“Dive in and use it,” Helfrich said of PittGPT. “I see huge benefits from all of the generative AI tools we have. I’ve saved time and produced better results.”"

Friday, August 15, 2025

Meta faces backlash over AI policy that lets bots have ‘sensual’ conversations with children; The Guardian, August 15, 2025

The Guardian; Meta faces backlash over AI policy that lets bots have ‘sensual’ conversations with children

"A backlash is brewing against Meta over what it permits its AI chatbots to say.

An internal Meta policy document, seen by Reuters, showed the social media giant’s guidelines for its chatbots allowed the AI to “engage a child in conversations that are romantic or sensual”, generate false medical information, and assist users in arguing that Black people are “dumber than white people”."

Tuesday, August 12, 2025

Man develops rare condition after ChatGPT query over stopping eating salt; The Guardian, August 12, 2025

The Guardian; Man develops rare condition after ChatGPT query over stopping eating salt

"A US medical journal has warned against using ChatGPT for health information after a man developed a rare condition following an interaction with the chatbot about removing table salt from his diet.

An article in the Annals of Internal Medicine reported a case in which a 60-year-old man developed bromism, also known as bromide toxicity, after consulting ChatGPT."

Monday, July 28, 2025

Your employees may be leaking trade secrets into ChatGPT; Fast Company, July 24, 2025

Kris Nagel, Fast Company; Your employees may be leaking trade secrets into ChatGPT

"Every CEO I know wants their team to use AI more, and for good reason: it can supercharge almost every area of their business and make employees vastly more efficient. Employee use of AI is a business imperative, but as it becomes more common, how can companies avoid major security headaches? 

Sift’s latest data found that 31% of consumers admit to entering personal or sensitive information into GenAI tools like ChatGPT, and 14% of those individuals explicitly reported entering company trade secrets. Other types of information that people admit to sharing with AI chatbots include financial details, nonpublic facts, email addresses, phone numbers, and information about employers. At its core, it reveals that people are increasingly willing to trust AI with sensitive information."

Wednesday, July 23, 2025

AI chatbots remain overconfident -- even when they’re wrong; EurekAlert!, July 22, 2025

Carnegie Mellon University, EurekAlert!; AI chatbots remain overconfident -- even when they’re wrong

"Artificial intelligence chatbots are everywhere these days, from smartphone apps and customer service portals to online search engines. But what happens when these handy tools overestimate their own abilities? 

Researchers asked both human participants and four large language models (LLMs) how confident they felt in their ability to answer trivia questions, predict the outcomes of NFL games or Academy Award ceremonies, or play a Pictionary-like image identification game. Both the people and the LLMs tended to be overconfident about how they would hypothetically perform. Interestingly, they also answered questions or identified images with relatively similar success rates.

However, when the participants and LLMs were asked retroactively how well they thought they did, only the humans appeared able to adjust expectations, according to a study published today in the journal Memory & Cognition.

“Say the people told us they were going to get 18 questions right, and they ended up getting 15 questions right. Typically, their estimate afterwards would be something like 16 correct answers,” said Trent Cash, who recently completed a joint Ph.D. at Carnegie Mellon University in the departments of Social Decision Science and Psychology. “So, they’d still be a little bit overconfident, but not as overconfident.”

“The LLMs did not do that,” said Cash, who was lead author of the study. “They tended, if anything, to get more overconfident, even when they didn’t do so well on the task.”

The world of AI is changing rapidly each day, which makes drawing general conclusions about its applications challenging, Cash acknowledged. However, one strength of the study was that the data was collected over the course of two years, which meant using continuously updated versions of the LLMs known as ChatGPT, Bard/Gemini, Sonnet and Haiku. This means that AI overconfidence was detectable across different models over time.

“When an AI says something that seems a bit fishy, users may not be as skeptical as they should be because the AI asserts the answer with confidence, even when that confidence is unwarranted,” said Danny Oppenheimer, a professor in CMU’s Department of Social and Decision Sciences and coauthor of the study."

Sunday, July 20, 2025

AI guzzled millions of books without permission. Authors are fighting back.; The Washington Post, July 19, 2025

The Washington Post; AI guzzled millions of books without permission. Authors are fighting back.


[Kip Currier: I've written this before on this blog and I'll say it again: technology companies would never allow anyone to freely vacuum up their content and use it without permission or compensation. Period. Full Stop.]


[Excerpt]

"Baldacci is among a group of authors suing OpenAI and Microsoft over the companies’ use of their work to train the AI software behind tools such as ChatGPT and Copilot without permission or payment — one of more than 40 lawsuits against AI companies advancing through the nation’s courts. He and other authors this week appealed to Congress for help standing up to what they see as an assault by Big Tech on their profession and the soul of literature.

They found sympathetic ears at a Senate subcommittee hearing Wednesday, where lawmakers expressed outrage at the technology industry’s practices. Their cause gained further momentum Thursday when a federal judge granted class-action status to another group of authors who allege that the AI firm Anthropic pirated their books.

“I see it as one of the moral issues of our time with respect to technology,” Ralph Eubanks, an author and University of Mississippi professor who is president of the Authors Guild, said in a phone interview. “Sometimes it keeps me up at night.”

Lawsuits have revealed that some AI companies had used legally dubious “torrent” sites to download millions of digitized books without having to pay for them."

Thursday, July 3, 2025

The AI Backlash Keeps Growing Stronger; Wired, June 28, 2025

Reece Rogers, Wired; The AI Backlash Keeps Growing Stronger

 "The negative response online is indicative of a larger trend: Right now, though a growing number of Americans use ChatGPT, many people are sick of AI’s encroachment into their lives and are ready to fight back...

Not only are the rich getting richer during the AI era, but many of the technology’s harms are falling on people of color and other marginalized communities. “Data centers are being located in these really poor areas that tend to be more heavily Black and brown,” Hanna says. She points out how locals have not just been fighting back online, but have also been organizing even more in-person to protect their communities from environmental pollution. We saw this in Memphis, Tennessee, recently, where Elon Musk’s artificial intelligence company xAI is building a large data center with over 30 methane-gas-powered generators that are spewing harmful exhaust.

The impacts of generative AI on the workforce are another core issue that critics are organizing around."

Tuesday, June 24, 2025

Anthropic wins key US ruling on AI training in authors' copyright lawsuit; Reuters, June 24, 2025

Reuters; Anthropic wins key US ruling on AI training in authors' copyright lawsuit

 "A federal judge in San Francisco ruled late on Monday that Anthropic's use of books without permission to train its artificial intelligence system was legal under U.S. copyright law.

Siding with tech companies on a pivotal question for the AI industry, U.S. District Judge William Alsup said Anthropic made "fair use" of books by writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson to train its Claude large language model.

Alsup also said, however, that Anthropic's copying and storage of more than 7 million pirated books in a "central library" infringed the authors' copyrights and was not fair use. The judge has ordered a trial in December to determine how much Anthropic owes for the infringement."