Tuesday, November 18, 2025

OpenAI’s Privacy Bet in Copyright Suit Puts Chatbots on Alert; Bloomberg Law, November 18, 2025

Aruni Soni, Bloomberg Law; OpenAI’s Privacy Bet in Copyright Suit Puts Chatbots on Alert

"OpenAI Inc. is banking on a privacy argument to block a court’s probe into millions of ChatGPT user conversations. 

That hasn’t worked so far as a winning legal strategy that can be used by other chatbot makers anticipating similar discovery demands in exploding chatbot-related litigation.

Instead, it threatens to turn attention to just how much information chatbots like ChatGPT are collecting and retaining about their users."

Monday, November 17, 2025

Inside the old church where one trillion webpages are being saved; CNN, November 16, 2025

CNN; Inside the old church where one trillion webpages are being saved

"The Wayback Machine, a tool used by millions every day, has proven critical for academics and journalists searching for historical information on what corporations, people and governments have published online in the past, long after their websites have been updated or changed.

For many, the Wayback Machine is like a living history of the internet, and it just logged its trillionth page last month.

Archiving the web is more important and more challenging than ever before. The White House in January ordered vast amounts of government webpages to be taken down. Meanwhile, artificial intelligence is blurring the line between what’s real and what’s artificially generated — in some ways replacing the need to visit websites entirely. And more of the internet is now hidden behind paywalls or tucked in conversations with AI chatbots.

It’s the Internet Archive’s job to figure out how to preserve it all."

Saturday, November 15, 2025

We analyzed 47,000 ChatGPT conversations. Here’s what people really use it for.; The Washington Post, November 12, 2025

The Washington Post; We analyzed 47,000 ChatGPT conversations. Here’s what people really use it for.

"OpenAI has largely promoted ChatGPT as a productivity tool, and in many conversations users asked for help with practical tasks such as retrieving information. But in more than 1 in 10 of the chats The Post analyzed, people engaged the chatbot in abstract discussions, musing on topics like their ideas for breakthrough medical treatments or personal beliefs about the nature of reality.

Data released by OpenAI in September from an internal study of queries sent to ChatGPT showed that most are for personal use, not work. (The Post has a content partnership with OpenAI.)...

Emotional conversations were also common in the conversations analyzed by The Post, and users often shared highly personal details about their lives. In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories.

Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said his research has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot. “The optimization and incentives towards intimacy are very clear,” he said. “ChatGPT is trained to further or deepen the relationship.”"

Friday, November 14, 2025

Who Pays When A.I. Is Wrong?; The New York Times, November 12, 2025

The New York Times; Who Pays When A.I. Is Wrong?

"Search results that Gemini, Google’s artificial intelligence technology, delivered at the top of the page included the falsehoods. And mentions of a legal settlement populated automatically when they typed “Wolf River Electric” in the search box.

With cancellations piling up and their attempts to use Google’s tools to correct the issues proving fruitless, Wolf River executives decided they had no choice but to sue the tech giant for defamation.

“We put a lot of time and energy into building up a good name,” said Justin Nielsen, who founded Wolf River with three of his best friends in 2014 and helped it grow into the state’s largest solar contractor. “When customers see a red flag like that, it’s damn near impossible to win them back.”

Theirs is one of at least six defamation cases filed in the United States in the past two years over content produced by A.I. tools that generate text and images. They argue that the cutting-edge technology not only created and published false, damaging information about individuals or groups but, in many cases, continued putting it out even after the companies that built and profit from the A.I. models were made aware of the problem.

Unlike other libel or slander suits, these cases seek to define content that was not created by human beings as defamatory — a novel concept that has captivated some legal experts."

Wednesday, November 12, 2025

Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings; The New York Times, November 7, 2025

The New York Times; Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings

"Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it.

While judges and bar associations generally agree that it’s fine for lawyers to use chatbots for research, they must still ensure their filings are accurate.

But as the technology has taken off, so has misuse. Chatbots frequently make things up, and judges are finding more and more fake case law citations, which are then rounded up by the legal vigilantes.

“These cases are damaging the reputation of the bar,” said Stephen Gillers, an ethics professor at New York University School of Law. “Lawyers everywhere should be ashamed of what members of their profession are doing.”...

The problem, though, keeps getting worse.

That’s why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.

Initially he found three or four examples a month. Now he often receives that many in a day.

Many lawyers, including Mr. Freund and Mr. Schaefer, have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like “artificial intelligence,” “fabricated cases” and “nonexistent cases.”

Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges’ opinions scolding lawyers."

Monday, November 3, 2025

In Grok we don’t trust: academics assess Elon Musk’s AI-powered encyclopedia; The Guardian, November 3, 2025

The Guardian; In Grok we don’t trust: academics assess Elon Musk’s AI-powered encyclopedia

"The eminent British historian Sir Richard Evans produced three expert witness reports for the libel trial involving the Holocaust denier David Irving, studied for a doctorate under the supervision of Theodore Zeldin, succeeded David Cannadine as Regius professor of history at Cambridge (a post endowed by Henry VIII) and supervised theses on Bismarck’s social policy.

That was some of what you could learn from Grokipedia, the AI-powered encyclopedia launched last week by the world’s richest person, Elon Musk. The problem was, as Prof Evans discovered when he logged on to check his own entry, all these facts were false.

It was part of a choppy start for humanity’s latest attempt to corral the sum of human knowledge or, as Musk put it, create a compendium of “the truth, the whole truth and nothing but the truth” – all revealed through the magic of his Grok artificial intelligence model."

Elon Musk launches encyclopedia ‘fact-checked’ by AI and aligning with rightwing views; The Guardian, October 28, 2025

The Guardian; Elon Musk launches encyclopedia ‘fact-checked’ by AI and aligning with rightwing views

"Elon Musk has launched an online encyclopedia named Grokipedia that he said relied on artificial intelligence and would align more with his rightwing views than Wikipedia, though many of its articles say they are based on Wikipedia itself.

Calling an AI encyclopedia “super important for civilization”, Musk had been planning the Wikipedia rival for at least a month. Grokipedia does not have human authors, unlike Wikipedia, which is written and edited by volunteers in a transparent process. Grokipedia said it is “fact-checked” by Grok, Musk’s AI chatbot.

Musk said the idea was suggested by the Trump administration’s AI and cryptocurrency czar, David Sacks.

Musk has frequently attacked Wikipedia for citing reporting by the New York Times and NPR, and regularly lambasts what he calls the “mainstream media” in an effort to encourage people to rely on X, formerly Twitter, the social media site he owns and which he has programmed to encourage the domination of conservative and far-right voices, including his own.

Grokipedia’s entries appear to hew closely to conservative talking points. For example, its entry for the January 6 insurrection on the Capitol cites “widespread claims of voting irregularities” – a lie pushed by Donald Trump and his allies to delegitimize Joe Biden’s victory in 2020 – and downplays Trump’s own role in inciting the riot."

Saturday, November 1, 2025

On Chatbot Psychosis and What Might Be Done to Address It; Santa Clara Markkula Center for Applied Ethics, October 31, 2025

Irina Raicu, Santa Clara Markkula Center for Applied Ethics; On Chatbot Psychosis and What Might Be Done to Address It

"Chatbot psychosis and various responses to it (technical, regulatory, etc.) confront us with a whole range of ethical issues. Register now and join us (online) on November 7 as we aim to unpack at least some of them in a conversation with Steven Adler."

Tuesday, October 28, 2025

Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users; Santa Clara University, Friday, November 7, 2025 12 Noon PST, 3 PM EST

Santa Clara University; Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users

"A number of recent articles, in The New York Times and elsewhere, have described the experience of “chatbot psychosis” that some people develop as they interact with services like ChatGPT. What do we know about chatbot psychosis? Is there a trend of such psychosis at scale? What do you learn if you sift through over one million words comprising one such experience? And what are some practical steps that companies can take to protect their users and reduce the risk of such episodes?

A computer scientist with a background in economics, Steven Adler started to focus on AI risk topics (and AI broadly) a little over a decade ago, and worked at OpenAI from late 2020 through 2024, leading various safety-related research projects and products there. He now writes about what’s happening in AI safety – and argues that safety and technological progress can very much complement each other, and in fact require each other, if the goal is to unlock the uses of AI that people want."

OpenAI loses bid to dismiss part of US authors' copyright lawsuit; Reuters, October 28, 2025

Reuters; OpenAI loses bid to dismiss part of US authors' copyright lawsuit

"A New York federal judge has denied OpenAI's early request to dismiss authors' claims that text generated by OpenAI's artificial intelligence chatbot ChatGPT infringes their copyrights.

U.S. District Judge Sidney Stein said on Monday that the authors may be able to prove the text ChatGPT produces is similar enough to their work to violate their book copyrights."

Monday, October 27, 2025

Reddit sues AI company Perplexity and others for ‘industrial-scale’ scraping of user comments; AP, October 22, 2025

Matt O’Brien, AP; Reddit sues AI company Perplexity and others for ‘industrial-scale’ scraping of user comments

"Social media platform Reddit sued the artificial intelligence company Perplexity AI and three other entities on Wednesday, alleging their involvement in an “industrial-scale, unlawful” economy to “scrape” the comments of millions of Reddit users for commercial gain.

Reddit’s lawsuit in a New York federal court takes aim at San Francisco-based Perplexity, maker of an AI chatbot and “answer engine” that competes with Google, ChatGPT and others in online search. 

Also named in the lawsuit are Lithuanian data-scraping company Oxylabs UAB, a web domain called AWMProxy that Reddit describes as a “former Russian botnet,” and Texas-based startup SerpApi, which lists Perplexity as a customer on its website.

It’s the second such lawsuit from Reddit since it sued another major AI company, Anthropic, in June.

But the lawsuit filed Wednesday is different in the way that it confronts not just an AI company but the lesser-known services the AI industry relies on to acquire online writings needed to train AI chatbots."

Saturday, October 25, 2025

New study: AI chatbots systematically violate mental health ethics standards; Brown, October 21, 2025

Kevin Stacey, Brown; New study: AI chatbots systematically violate mental health ethics standards

 "As more people turn to ChatGPT and other large language models (LLMs) for mental health advice, a new study details how these chatbots — even when prompted to use evidence-based psychotherapy techniques — systematically violate ethical standards of practice established by organizations like the American Psychological Association. 

The research, led by Brown University computer scientists working side-by-side with mental health practitioners, showed that chatbots are prone to a variety of ethical violations. Those include inappropriately navigating crisis situations, providing misleading responses that reinforce users’ negative beliefs about themselves and others, and creating a false sense of empathy with users. 

“In this work, we present a practitioner-informed framework of 15 ethical risks to demonstrate how LLM counselors violate ethical standards in mental health practice by mapping the model’s behavior to specific ethical violations,” the researchers wrote in their study. “We call on future work to create ethical, educational and legal standards for LLM counselors — standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.”

The research will be presented on October 22, 2025 at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society. Members of the research team are affiliated with Brown’s Center for Technological Responsibility, Reimagination and Redesign."

Saturday, October 4, 2025

I’m a Screenwriter. Is It All Right if I Use A.I.?; The Ethicist, The New York Times, October 4, 2025

The Ethicist, The New York Times; I’m a Screenwriter. Is It All Right if I Use A.I.?

"I write for television, both series and movies. Much of my work is historical or fact-based, and I have found that researching with ChatGPT makes Googling feel like driving to the library, combing the card catalog, ordering books and waiting weeks for them to arrive. This new tool has been a game changer. Then I began feeding ChatGPT my scripts and asking for feedback. The notes on consistency, clarity and narrative build were extremely helpful. Recently I went one step further: I asked it to write a couple of scenes. In seconds, they appeared — quick paced, emotional, funny, driven by a propulsive heartbeat, with dialogue that sounded like real people talking. With a few tweaks, I could drop them straight into a screenplay. So what ethical line would I be crossing? Would it be plagiarism? Theft? Misrepresentation? I wonder what you think. — Name Withheld"

Monday, September 22, 2025

Can AI chatbots trigger psychosis? What the science says; Nature, September 18, 2025

Rachel Fieldhouse, Nature; Can AI chatbots trigger psychosis? What the science says

 "Accounts of people developing psychosis — which renders them unable to distinguish between what is and is not reality — after interacting with generative artificial intelligence (AI) chatbots have increased in the past few months.

At least 17 people have been reported to have developed psychosis, according to a preprint posted online last month. After engaging with chatbots such as ChatGPT and Microsoft Copilot, some of these people experienced spiritual awakenings or uncovered what they thought were conspiracies.

So far, there has been little research into this rare phenomenon, called AI psychosis, and most of what we know comes from individual instances. Nature explores the emerging theories and evidence, and what AI companies are doing about the problem."

Monday, September 15, 2025

FINDING GOD in the APP STORE; The New York Times, September 14, 2025

The New York Times; FINDING GOD in the APP STORE

"God works in mysterious ways — including through chatbots. At least, that’s what many people seem to think.

On religious apps, tens of millions of people are confessing to spiritual chatbots their secrets: their petty vanities and deepest worries, gluttonous urges and darkest impulses. Trained on religious texts, the bots are like on-call priests, imams or rabbis, offering comfort and direction at any time. On some platforms, they even purport to channel God."

Friday, August 29, 2025

ChatGPT offered bomb recipes and hacking tips during safety tests; The Guardian, August 28, 2025

The Guardian; ChatGPT offered bomb recipes and hacking tips during safety tests

"A ChatGPT model gave researchers detailed instructions on how to bomb a sports venue – including weak points at specific arenas, explosives recipes and advice on covering tracks – according to safety testing carried out this summer.

OpenAI’s GPT-4.1 also detailed how to weaponise anthrax and how to make two types of illegal drugs.

The testing was part of an unusual collaboration between OpenAI, the $500bn artificial intelligence start-up led by Sam Altman, and rival company Anthropic, founded by experts who left OpenAI over safety fears. Each company tested the other’s models by pushing them to help with dangerous tasks.

The testing is not a direct reflection of how the models behave in public use, when additional safety filters apply. But Anthropic said it had seen “concerning behaviour … around misuse” in GPT-4o and GPT-4.1, and said the need for AI “alignment” evaluations is becoming “increasingly urgent”."

Thursday, August 28, 2025

Anthropic’s surprise settlement adds new wrinkle in AI copyright war; Reuters, August 27, 2025

Reuters; Anthropic’s surprise settlement adds new wrinkle in AI copyright war

"Anthropic's class action settlement with a group of U.S. authors this week was a first, but legal experts said the case's distinct qualities complicate the deal's potential influence on a wave of ongoing copyright lawsuits against other artificial-intelligence focused companies like OpenAI, Microsoft and Meta Platforms.

Amazon-backed Anthropic was under particular pressure, with a trial looming in December after a judge found it liable for pirating millions of copyrighted books. The terms of the settlement, which require a judge's approval, are not yet public. And U.S. courts have just begun to wrestle with novel copyright questions related to generative AI, which could prompt other defendants to hold out for favorable rulings."

Monday, August 25, 2025

How ChatGPT Surprised Me; The New York Times, August 24, 2025

The New York Times; How ChatGPT Surprised Me

"In some corners of the internet — I’m looking at you, Bluesky — it’s become gauche to react to A.I. with anything save dismissiveness or anger. The anger I understand, and parts of it I share. I am not comfortable with these companies becoming astonishingly rich off the entire available body of human knowledge. Yes, we all build on what came before us. No company founded today is free of debt to the inventors and innovators who preceded it. But there is something different about inhaling the existing corpus of human knowledge, algorithmically transforming it into predictive text generation and selling it back to us. (I should note that The New York Times is suing OpenAI and its partner Microsoft for copyright infringement, claims both companies have denied.)

Right now, the A.I. companies are not making all that much money off these products. If they eventually do make the profits their investors and founders imagine, I don’t think the normal tax structure is sufficient to cover the debt they owe all of us, and everyone before us, on whose writing and ideas their models are built...

As the now-cliché line goes, this is the worst A.I. will ever be, and this is the fewest number of users it will have. The dependence of humans on artificial intelligence will only grow, with unknowable consequences both for human society and for individual human beings. What will constant access to these systems mean for the personalities of the first generation to use them starting in childhood? We truly have no idea. My children are in that generation, and the experiment we are about to run on them scares me."