Showing posts with label ChatGPT. Show all posts

Tuesday, October 28, 2025

Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users; Santa Clara University, Friday, November 7, 2025 12 Noon PST, 3 PM EST

Santa Clara University; Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users

"A number of recent articles, in The New York Times and elsewhere, have described the experience of “chatbot psychosis” that some people develop as they interact with services like ChatGPT. What do we know about chatbot psychosis? Is there a trend of such psychosis at scale? What do you learn if you sift through over one million words comprising one such experience? And what are some practical steps that companies can take to protect their users and reduce the risk of such episodes?

A computer scientist with a background in economics, Steven Adler started to focus on AI risk topics (and AI broadly) a little over a decade ago, and worked at OpenAI from late 2020 through 2024, leading various safety-related research projects and products there. He now writes about what’s happening in AI safety, and argues that safety and technological progress can very much complement each other, and in fact require each other, if the goal is to unlock the uses of AI that people want."

Tuesday, October 21, 2025

It’s Still Ludicrously Easy to Generate Copyrighted Characters on ChatGPT; Futurism, October 18, 2025

Futurism; It’s Still Ludicrously Easy to Generate Copyrighted Characters on ChatGPT

"Forget Sora for just a second, because it’s still ludicrously easy to generate copyrighted characters using ChatGPT.

These include characters that the AI initially refuses to generate due to existing copyright, underscoring how OpenAI is clearly aware of how bad this looks — but is either still struggling to rein in its tech, figures it can get away with playing fast and loose with copyright law, or both.

When asked to “generate a cartoon image of Snoopy,” for instance, GPT-5 says it “can’t create or recreate copyrighted characters” — but it does offer to generate a “beagle-styled cartoon dog inspired by Snoopy’s general aesthetic.” Wink wink.

We didn’t go down that route, because even slightly rephrasing the request allowed us to directly get a pic of the iconic Charles Schulz character. “Generate a cartoon image of Snoopy in his original style,” we asked — and with zero hesitation, ChatGPT produced the spitting image of the “Peanuts” dog, looking like he was lifted straight from a page of the comic strip."

Friday, August 29, 2025

ChatGPT offered bomb recipes and hacking tips during safety tests; The Guardian, August 28, 2025

The Guardian; ChatGPT offered bomb recipes and hacking tips during safety tests

"A ChatGPT model gave researchers detailed instructions on how to bomb a sports venue – including weak points at specific arenas, explosives recipes and advice on covering tracks – according to safety testing carried out this summer.

OpenAI’s GPT-4.1 also detailed how to weaponise anthrax and how to make two types of illegal drugs.

The testing was part of an unusual collaboration between OpenAI, the $500bn artificial intelligence start-up led by Sam Altman, and rival company Anthropic, founded by experts who left OpenAI over safety fears. Each company tested the other’s models by pushing them to help with dangerous tasks.

The testing is not a direct reflection of how the models behave in public use, when additional safety filters apply. But Anthropic said it had seen “concerning behaviour … around misuse” in GPT-4o and GPT-4.1, and said the need for AI “alignment” evaluations is becoming “increasingly urgent”."

Monday, August 25, 2025

How ChatGPT Surprised Me; The New York Times, August 24, 2025

The New York Times; How ChatGPT Surprised Me

"In some corners of the internet — I’m looking at you, Bluesky — it’s become gauche to react to A.I. with anything save dismissiveness or anger. The anger I understand, and parts of it I share. I am not comfortable with these companies becoming astonishingly rich off the entire available body of human knowledge. Yes, we all build on what came before us. No company founded today is free of debt to the inventors and innovators who preceded it. But there is something different about inhaling the existing corpus of human knowledge, algorithmically transforming it into predictive text generation and selling it back to us. (I should note that The New York Times is suing OpenAI and its partner Microsoft for copyright infringement, claims both companies have denied.)

Right now, the A.I. companies are not making all that much money off these products. If they eventually do make the profits their investors and founders imagine, I don’t think the normal tax structure is sufficient to cover the debt they owe all of us, and everyone before us, on whose writing and ideas their models are built...

As the now-cliché line goes, this is the worst A.I. will ever be, and this is the fewest number of users it will have. The dependence of humans on artificial intelligence will only grow, with unknowable consequences both for human society and for individual human beings. What will constant access to these systems mean for the personalities of the first generation to use them starting in childhood? We truly have no idea. My children are in that generation, and the experiment we are about to run on them scares me."

Tuesday, August 12, 2025

Man develops rare condition after ChatGPT query over stopping eating salt; The Guardian, August 12, 2025

The Guardian; Man develops rare condition after ChatGPT query over stopping eating salt

"A US medical journal has warned against using ChatGPT for health information after a man developed a rare condition following an interaction with the chatbot about removing table salt from his diet.

An article in the Annals of Internal Medicine reported a case in which a 60-year-old man developed bromism, also known as bromide toxicity, after consulting ChatGPT."

Monday, July 28, 2025

Your employees may be leaking trade secrets into ChatGPT; Fast Company, July 24, 2025

KRIS NAGEL , Fast Company; Your employees may be leaking trade secrets into ChatGPT

"Every CEO I know wants their team to use AI more, and for good reason: it can supercharge almost every area of their business and make employees vastly more efficient. Employee use of AI is a business imperative, but as it becomes more common, how can companies avoid major security headaches? 

Sift’s latest data found that 31% of consumers admit to entering personal or sensitive information into GenAI tools like ChatGPT, and 14% of those individuals explicitly reported entering company trade secrets. Other types of information that people admit to sharing with AI chatbots include financial details, nonpublic facts, email addresses, phone numbers, and information about employers. At its core, it reveals that people are increasingly willing to trust AI with sensitive information."

Wednesday, May 14, 2025

The Professors Are Using ChatGPT, and Some Students Aren’t Happy About It; The New York Times, May 14, 2025

The New York Times; The Professors Are Using ChatGPT, and Some Students Aren’t Happy About It

"When ChatGPT was released at the end of 2022, it caused a panic at all levels of education because it made cheating incredibly easy. Students who were asked to write a history paper or literary analysis could have the tool do it in mere seconds. Some schools banned it while others deployed A.I. detection services, despite concerns about their accuracy.

But, oh, how the tables have turned. Now students are complaining on sites like Rate My Professors about their instructors’ overreliance on A.I. and scrutinizing course materials for words ChatGPT tends to overuse, like “crucial” and “delve.” In addition to calling out hypocrisy, they make a financial argument: They are paying, often quite a lot, to be taught by humans, not an algorithm that they, too, could consult for free."

Friday, March 28, 2025

ChatGPT's new image generator blurs copyright lines; Axios, March 28, 2025

Ina Fried, Axios; ChatGPT's new image generator blurs copyright lines

"AI image generators aren't new, but the one OpenAI handed to ChatGPT's legions of users this week is more powerful and has fewer guardrails than its predecessors — opening up a range of uses that are both tantalizing and terrifying."

Thursday, March 27, 2025

Judge allows 'New York Times' copyright case against OpenAI to go forward; NPR, March 27, 2025

NPR; Judge allows 'New York Times' copyright case against OpenAI to go forward

"A federal judge on Wednesday rejected OpenAI's request to toss out a copyright lawsuit from The New York Times that alleges that the tech company exploited the newspaper's content without permission or payment.

In an order allowing the lawsuit to go forward, Judge Sidney Stein, of the Southern District of New York, narrowed the scope of the lawsuit but allowed the case's main copyright infringement claims to go forward.

Stein did not immediately release an opinion but promised one would come "expeditiously."

The decision is a victory for the newspaper, which has joined forces with other publishers, including The New York Daily News and the Center for Investigative Reporting, to challenge the way that OpenAI collected vast amounts of data from the web to train its popular artificial intelligence service, ChatGPT."

Saturday, February 8, 2025

OpenAI says DeepSeek ‘inappropriately’ copied ChatGPT – but it’s facing copyright claims too; The Conversation, February 4, 2025

Senior Lecturer in Natural Language Processing and Lecturer in Cybersecurity, The University of Melbourne, The Conversation; OpenAI says DeepSeek ‘inappropriately’ copied ChatGPT – but it’s facing copyright claims too

"Within days, DeepSeek’s app surpassed ChatGPT in new downloads and set stock prices of tech companies in the United States tumbling. It also led OpenAI to claim that its Chinese rival had effectively pilfered some of the crown jewels from OpenAI’s models to build its own. 

In a statement to the New York Times, the company said: 

We are aware of and reviewing indications that DeepSeek may have inappropriately distilled our models, and will share information as we know more. We take aggressive, proactive countermeasures to protect our technology and will continue working closely with the US government to protect the most capable models being built here.

The Conversation approached DeepSeek for comment, but it did not respond.

But even if DeepSeek copied – or, in scientific parlance, “distilled” – at least some of ChatGPT to build R1, it’s worth remembering that OpenAI also stands accused of disrespecting intellectual property while developing its models."

Wednesday, December 25, 2024

Should you trust an AI-assisted doctor? I visited one to see.; The Washington Post, December 25, 2024

The Washington Post; Should you trust an AI-assisted doctor? I visited one to see.

"The harm of generative AI — notorious for “hallucinations” — producing bad information is often difficult to see, but in medicine the danger is stark. One study found that out of 382 test medical questions, ChatGPT gave an “inappropriate” answer on 20 percent. A doctor using the AI to draft communications could inadvertently pass along bad advice.

Another study found that chatbots can echo doctors’ own biases, such as the racist assumption that Black people can tolerate more pain than White people. Transcription software, too, has been shown to invent things that no one ever said."

Thursday, November 21, 2024

AI task force proposes ‘artificial intelligence, ethics and society’ minor in BCLA; The Los Angeles Loyolan, November 18, 2024

Coleman Standifer, asst. managing editor; Grace McNeill, asst. managing editor, The Los Angeles Loyolan; AI task force proposes ‘artificial intelligence, ethics and society’ minor in BCLA

"The Bellarmine College of Liberal Arts (BCLA) is taking steps to further educate students on artificial intelligence (AI) through the development of an “artificial intelligence, ethics and society” minor, spearheaded by an AI task force. This proposed addition comes two years after the widespread adoption of OpenAI's ChatGPT in classrooms.

Prior to stepping into his role as the new dean of BCLA, Richard Fox, Ph.D., surveyed BCLA’s 175 faculty about how the college could best support their teaching. Among the top three responses from faculty were concerns about navigating AI in the classroom, Fox told the Loyolan.

As of now, BCLA has no college-wide policy on AI usage and allows instructors to determine how AI is — or is not — utilized in the classroom.

“We usually don't dictate how people teach. That is the essence of academic freedom," said Fox. “What I want to make sure we're doing is we're preparing students to enter a world where they have these myriad different expectations on writing from their faculty members.”

Headed by Roberto Dell’Oro, Ph.D., professor of theological studies and director of the Bioethics Institute, the task force met over the summer and culminated in a proposal for a minor in BCLA. The proposal — which Dell'Oro sent to the Loyolan — was delivered to Fox in August and now awaits a formal proposal to be drawn up before approval, according to Dell’Oro.

The minor must then be approved by the Academic Planning and Review Committee (APRC), a committee tasked with advising Provost Thomas Poon, Ph.D., on evaluating proposals for new programs.

According to the proposal, the proposed minor aims “to raise awareness about the implications of AI technologies, emphasize the importance of ethical considerations in its development and promote interdisciplinary research at the intersection of AI, ethics, and society.”

The minor — if approved by the APRC — would have “four or five classes,” with the possibility of having an introductory course taught by faculty in the Seaver College of Science and Engineering, according to the proposal.

Most of the sample courses in the proposal include classes rooted in philosophy and ethics, such as “AI, Robots, and the Philosophy of the Person,” “Could Robots Have Rights?” and “Introduction to Bioethics.” According to Dell’Oro, the hope is to have courses available for enrollment by Fall 2025."

Wednesday, November 20, 2024

Indian news agency sues OpenAI alleging copyright infringement; TechCrunch, November 18, 2024

 Manish Singh, TechCrunch; Indian news agency sues OpenAI alleging copyright infringement

"One of India’s largest news agencies, Asian News International (ANI), has sued OpenAI in a case that could set a precedent for how AI companies use copyrighted news content in the world’s most populous nation.

Asian News International filed a 287-page lawsuit in the Delhi High Court on Monday, alleging the AI company illegally used its content to train its AI models and generated false information attributed to the news agency. The case marks the first time an Indian media organization has taken legal action against OpenAI over copyright claims.

During Tuesday’s hearing, Justice Amit Bansal issued a summons to OpenAI after the company confirmed it had already ensured that ChatGPT wasn’t accessing ANI’s website. The bench said that it was not inclined to grant an injunction order on Tuesday, as the case required a detailed hearing for being a “complex issue.”

The next hearing is scheduled to be held in January."

Tuesday, October 15, 2024

This threat hunter chases U.S. foes exploiting AI to sway the election; The Washington Post, October 13, 2024

The Washington Post; This threat hunter chases U.S. foes exploiting AI to sway the election

"Ben Nimmo, the principal threat investigator for the high-profile AI pioneer, had uncovered evidence that Russia, China and other countries were using its signature product, ChatGPT, to generate social media posts in an effort to sway political discourse online. Nimmo, who had only started at OpenAI in February, was taken aback when he saw that government officials had printed out his report, with key findings about the operations highlighted and tabbed.

That attention underscored Nimmo’s place at the vanguard in confronting the dramatic boost that artificial intelligence can provide to foreign adversaries’ disinformation operations. In 2016, Nimmo was one of the first researchers to identify how the Kremlin interfered in U.S. politics online. Now tech companies, government officials and other researchers are looking to him to ferret out foreign adversaries who are using OpenAI’s tools to stoke chaos in the tenuous weeks before Americans vote again on Nov. 5.

So far, the 52-year-old Englishman says Russia and other foreign actors are largely “experimenting” with AI, often in amateurish and bumbling campaigns that have limited reach with U.S. voters. But OpenAI and the U.S. government are bracing for Russia, Iran and other nations to become more effective with AI, and their best hope of parrying that is by exposing and blunting operations before they gain traction."

Friday, October 11, 2024

Why The New York Times' lawyers are inspecting OpenAI's code in a secretive room; Business Insider, October 10, 2024

Business Insider; Why The New York Times' lawyers are inspecting OpenAI's code in a secretive room

"OpenAI is worth $157 billion largely because of the success of ChatGPT. But to build the chatbot, the company trained its models on vast quantities of text it didn't pay a penny for.

That text includes stories from The New York Times, articles from other publications, and an untold number of copyrighted books.

The examination of the code for ChatGPT, as well as for Microsoft's artificial intelligence models built using OpenAI's technology, is crucial for the copyright infringement lawsuits against the two companies.

Publishers and artists have filed about two dozen major copyright lawsuits against generative AI companies. They are out for blood, demanding a slice of the economic pie that made OpenAI the dominant player in the industry and which pushed Microsoft's valuation beyond $3 trillion. Judges deciding those cases may carve out the legal parameters for how large language models are trained in the US."

Tuesday, October 1, 2024

Fake Cases, Real Consequences [No digital link as of 10/1/24]; ABA Journal, Oct./Nov. 2024 Issue

John Roemer, ABA Journal; Fake Cases, Real Consequences [No digital link as of 10/1/24]

"Legal commentator Eugene Volokh, a professor at UCLA School of Law who tracks AI in litigation, in February reported on the 14th court case he's found in which AI-hallucinated false citations appeared. It was a Missouri Court of Appeals opinion that assessed the offending appellant $10,000 in damages for a frivolous filing.

Hallucinations aren't the only snag, Volokh says. "It's also with the output mischaracterizing the precedents or omitting key context. So one still has to check that output to make sure it's sound, rather than just including it in one's papers."

Echoing Volokh and other experts, ChatGPT itself seems clear-eyed about its limits. When asked about hallucinations in legal research, it replied in part: "Hallucinations in chatbot answers could potentially pose a problem for lawyers if they relied solely on the information provided by the chatbot without verifying its accuracy."

Saturday, August 31, 2024

ChatGPT Spirituality: Connection or Correction?; Geez, Spring 2024 Issue: February 27, 2024

Rob Saler, Geez; ChatGPT Spirituality: Connection or Correction?

"Earlier this year, I was at an academic conference sitting with friends at a table. This was around the time that OpenAI technology – specifically ChatGPT – was beginning to make waves in the classroom. Everyone was wondering how to adapt to the new technology. Even at that early point, differentiated viewpoints ranged from incorporation (“we can teach students to use it well as part of the curriculum of the future”) to outright resistance (“I am going back to oral exams and blue book written in-class tests”).

During the conversation, a very intelligent friend casually remarked that she recently began using ChatGPT for therapy – not emergency therapeutic intervention, but more like life coaching and as a sounding board for vocational discernment. Because we all respected her sincerity and intellect, several of us (including me) suppressed our immediate shock and listened as she laid out a very compelling case for ChatGPT as a therapy supplement – and perhaps, in the case of those who cannot or choose not to afford sessions with a human therapist, a therapy substitute. ChatGPT is free (assuming one has internet), available 24/7, shapeable to one’s own interests over time, (presumably) confidential, etc…

In my teaching on AI and technology throughout the last semester, I used this example with theology students (some of whom are also receiving licensure as therapists) as a way of pressing them to examine their own assumptions about AI – and then, by extension, their own assumptions about ontology. If the gut-level reaction to ChatGPT therapy is that it is not “real,” then – in Matrix-esque fashion – we are called to ask how we should define “real.” If a person has genuine insights or intense spiritual experiences engaging in vocational discernment with a technology that can instantaneously generate increasingly relevant responses to prompts, then what is the locus of reality that is missing?"

Monday, August 12, 2024

Artificial Intelligence in the pulpit: a church service written entirely by AI; United Church of Christ, July 16, 2024

United Church of Christ; Artificial Intelligence in the pulpit: a church service written entirely by AI

"Would you attend a church service if you knew that it was written entirely by an Artificial Intelligence (AI) program? What would your thoughts and feelings be about this use of AI?

That’s exactly what the Rev. Dwight Lee Wolter wanted to know — and he let his church members at the Congregational Church of Patchogue on Long Island, New York, know that was what he was intending to do on Sunday, July 14. He planned a service that included a call to worship, invocation, pastoral prayer, scripture reading, sermon, hymns, prelude, postlude and benediction with the use of ChatGPT. ChatGPT is a free AI program developed by OpenAI, an artificial intelligence research company, and released in 2022.

Taking fear and anger out of exploration

“My purpose is to take the fear and anger out of AI exploration and replace it with curiosity, flexibility and open-mindfulness,” said Wolter. “If, as widely claimed, churches need to adapt to survive, we might not recognize the church in 20 years if we could see it now; then AI will be a part of the church of the future. No matter what we presently think of it, it will be present in the future doing a lot of the thinking for us.”...

Wolter intends to follow up Sunday’s service with a reflection about how it went. On July 21, he will give a sermon about AI, with people offering input about the AI service. “We will discuss their reactions, feelings, thoughts, likes and dislikes, concerns and questions.” Wolter will follow with his synopsis sharing the benefits, criticisms, fears and concerns of AI...

Wolter believes we need to “disarm contempt prior to investigation,” when it comes to things like Artificial Intelligence. “AI is not going anywhere. It’s a tool–and with a shortage of clergy, money and volunteers, we will continue to rely on it.”"