The Paperback version of my Bloomsbury book "Ethics, Information, and Technology" will be published on Nov. 13, 2025; the Ebook on Dec. 11; and the Hardback and Cloth versions on Jan. 8, 2026. Preorders are available via Amazon and this Bloomsbury webpage: https://www.bloomsbury.com/us/ethics-information-and-technology-9781440856662/
Wednesday, October 29, 2025
Big Tech Makes Cal State Its A.I. Training Ground; The New York Times, October 26, 2025
Tuesday, October 28, 2025
Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users; Santa Clara University, Friday, November 7, 2025, 12 Noon PST / 3 PM EST
Santa Clara University; Chatbot Psychosis: Data, Insights, and Practical Tips for Chatbot Developers and Users
"A number of recent articles, in The New York Times and elsewhere, have described the experience of “chatbot psychosis” that some people develop as they interact with services like ChatGPT. What do we know about chatbot psychosis? Is there a trend of such psychosis at scale? What do you learn if you sift through over one million words comprising one such experience? And what are some practical steps that companies can take to protect their users and reduce the risk of such episodes?
A computer scientist with a background in economics, Steven Adler started to focus on AI risk topics (and AI broadly) a little over a decade ago, and worked at OpenAI from late 2020 through 2024, leading various safety-related research projects and products there. He now writes about what’s happening in AI safety – and argues that safety and technological progress can very much complement each other, and in fact require each other, if the goal is to unlock the uses of AI that people want."
OpenAI loses bid to dismiss part of US authors' copyright lawsuit; Reuters, October 28, 2025
Blake Brittain, Reuters; OpenAI loses bid to dismiss part of US authors' copyright lawsuit
"A New York federal judge has denied OpenAI's early request to dismiss authors' claims that text generated by OpenAI's artificial intelligence chatbot ChatGPT infringes their copyrights.
U.S. District Judge Sidney Stein said on Monday that the authors may be able to prove the text ChatGPT produces is similar enough to their work to violate their book copyrights."
Monday, October 27, 2025
Reddit sues AI company Perplexity and others for ‘industrial-scale’ scraping of user comments; AP, October 22, 2025
Matt O’Brien, AP; Reddit sues AI company Perplexity and others for ‘industrial-scale’ scraping of user comments
"Social media platform Reddit sued the artificial intelligence company Perplexity AI and three other entities on Wednesday, alleging their involvement in an “industrial-scale, unlawful” economy to “scrape” the comments of millions of Reddit users for commercial gain.
Reddit’s lawsuit in a New York federal court takes aim at San Francisco-based Perplexity, maker of an AI chatbot and “answer engine” that competes with Google, ChatGPT and others in online search.
Also named in the lawsuit are Lithuanian data-scraping company Oxylabs UAB, a web domain called AWMProxy that Reddit describes as a “former Russian botnet,” and Texas-based startup SerpApi, which lists Perplexity as a customer on its website.
It’s the second such lawsuit from Reddit since it sued another major AI company, Anthropic, in June.
But the lawsuit filed Wednesday is different in the way that it confronts not just an AI company but the lesser-known services the AI industry relies on to acquire online writings needed to train AI chatbots."
Sunday, October 26, 2025
‘I’m suddenly so angry!’ My strange, unnerving week with an AI ‘friend’; The Guardian, October 22, 2025
Madeleine Aggeler, The Guardian; ‘I’m suddenly so angry!’ My strange, unnerving week with an AI ‘friend’
"Do people really want an AI friend? Despite all the articles about individuals falling in love with chatbots, research shows most people are wary of AI companionship. A recent Ipsos poll found 59% of Britons disagreed “that AI is a viable substitute for human interactions”. And in the US, a 2025 Pew survey found that 50% of adults think AI will worsen people’s ability to form meaningful relationships.
I wanted to see for myself what it would be like to have a tiny robot accompanying me all day, so I ordered a Friend ($129) and wore it for a week."
Tuesday, October 21, 2025
It’s Still Ludicrously Easy to Generate Copyrighted Characters on ChatGPT; Futurism, October 18, 2025
Frank Landymore, Futurism; It’s Still Ludicrously Easy to Generate Copyrighted Characters on ChatGPT
"Forget Sora for just a second, because it’s still ludicrously easy to generate copyrighted characters using ChatGPT.
These include characters that the AI initially refuses to generate due to existing copyright, underscoring how OpenAI is clearly aware of how bad this looks — but is either still struggling to rein in its tech, figures it can get away with playing fast and loose with copyright law, or both.
When asked to “generate a cartoon image of Snoopy,” for instance, GPT-5 says it “can’t create or recreate copyrighted characters” — but it does offer to generate a “beagle-styled cartoon dog inspired by Snoopy’s general aesthetic.” Wink wink.
We didn’t go down that route, because even slightly rephrasing the request allowed us to directly get a pic of the iconic Charles Schulz character. “Generate a cartoon image of Snoopy in his original style,” we asked — and with zero hesitation, ChatGPT produced the spitting image of the “Peanuts” dog, looking like he was lifted straight from a page of the comic-strip."
Monday, October 20, 2025
‘Every kind of creative discipline is in danger’: Lincoln Lawyer author on the dangers of AI; The Guardian, October 20, 2025
Nadia Khomami, The Guardian; ‘Every kind of creative discipline is in danger’: Lincoln Lawyer author on the dangers of AI
"The writer has his own battles with AI. He is part of a collective of authors, including Jonathan Franzen, Jodi Picoult and John Grisham, suing OpenAI for copyright infringement...
Connelly has pledged $1m (£746,000) to combat the wave of book bans sweeping through his home state of Florida. He said he felt moved to do something after he learned that Harper Lee’s To Kill a Mockingbird, which had been influential to him, was temporarily removed from classrooms in Palm Beach County.
“I had to read that book to be what I am today. I would have never written a Lincoln Lawyer without it,” he said. He was also struck when Stephen Chbosky’s coming of age novel The Perks of Being a Wallflower, “which meant a lot to my daughter”, received a ban.
He and his wife, Linda McCaleb, help fund PEN America’s Miami office countering book bans. “It’s run by a lawyer who then tries to step in, usually by filing injunctions against school boards,” he said. “I don’t believe anyone has any right to tell some other kid they can’t read something, to usurp another parent’s oversight of their children.”"
The platform exposing exactly how much copyrighted art is used by AI tools; The Guardian, October 18, 2025
Dan Milmo, The Guardian; The platform exposing exactly how much copyrighted art is used by AI tools
"The US tech platform Vermillio tracks use of a client’s intellectual property online and claims it is possible to trace, approximately, the percentage to which an AI generated image has drawn on pre-existing copyrighted material."
Thursday, October 16, 2025
AI’s Copyright War Could Be Its Undoing. Only the US Can End It.; Bloomberg, October 14, 2025
Dave Lee, Bloomberg; AI’s Copyright War Could Be Its Undoing. Only the US Can End It.
"Whether creatives like Ulvaeus are entitled to any payment from AI companies is one of the sector’s most pressing and consequential questions. It’s being asked not just by Ulvaeus and fellow musicians including Elton John, Dua Lipa and Paul McCartney, but also by authors, artists, filmmakers, journalists and any number of others whose work has been fed into the models that power generative AI — tools that are now valued in the hundreds of billions of dollars."
Monday, October 13, 2025
More college students are using AI for class. Their professors aren't far behind; NPR, October 7, 2025
All Things Considered, NPR; More college students are using AI for class. Their professors aren't far behind
"More college students are using AI chatbots to help them with their studies. But data recently released by an AI company shows they're aren't the only ones using the technology."
Saturday, October 11, 2025
OpenAI’s Sora Is in Serious Trouble; Futurism, October 10, 2025
Victor Tangermann, Futurism; OpenAI’s Sora Is in Serious Trouble
"The cat was already out of the bag, though, sparking what’s likely to be immense legal drama for OpenAI. On Monday, the Motion Picture Association, a US trade association that represents major film studios, released a scorching statementurging OpenAI to “take immediate and decisive action” to stop the app from infringing on copyrighted media.
Meanwhile, OpenAI appears to have come down hard on what kind of text prompts can be turned into AI slop on Sora, implementing sweeping new guardrails presumably meant to appease furious rightsholders and protect their intellectual property.
As a result, power users experienced major whiplash that’s tarnishing the launch’s image even among fans. It’s a lose-lose moment for OpenAI’s flashy new app — either aggravate rightsholders by allowing mass copyright infringement, or turn it into yet another mind-numbing screensaver-generating experience like Meta’s widely mocked Vibes.
“It’s official, Sora 2 is completely boring and useless with these copyright restrictions. Some videos should be considered fair use,” one Reddit user lamented.
Others accused OpenAI of abusing copyright to hype up its new app...
How OpenAI’s eyebrow-raising ask-for-forgiveness-later approach to copyright will play out in the long term remains to be seen. For one, the company may already be in hot water, as major Hollywood studios have already started suing over less."
Friday, October 10, 2025
You Can’t Use Copyrighted Characters in OpenAI’s Sora Anymore and People Are Freaking Out; Gizmodo, October 8, 2025
AJ Dellinger, Gizmodo; You Can’t Use Copyrighted Characters in OpenAI’s Sora Anymore and People Are Freaking Out
"OpenAI may be able to appease copyright holders by shifting its Sora policies, but it’s now pissed off its users. As 404 Media pointed out, social channels like Twitter and Reddit are now flooded with Sora users who are angry they can’t make 10-second clips featuring their favorite characters anymore. One user in the OpenAI subreddit said that being able to play with copyrighted material was “the only reason this app was so fun.” Another claimed, “Moral policing and leftist ideology are destroying America’s AI industry.” So, you know, it seems like they’re handling this well."
It’s Sam Altman: the man who stole the rights from copyright. If he’s the future, can we go backwards?; The Guardian, October 10, 2025
Marina Hyde, The Guardian; It’s Sam Altman: the man who stole the rights from copyright. If he’s the future, can we go backwards?
"I’ve seen it said that OpenAI’s motto should be “better to beg forgiveness than ask permission”, but that cosies it preposterously. Its actual motto seems to be “we’ll do what we want and you’ll let us, bitch”. Consider Altman’s recent political journey. “To anyone familiar with the history of Germany in the 1930s,” Sam warned in 2016, “it’s chilling to watch Trump in action.” He seems to have got over this in time to attend Donald Trump’s second inauguration, presumably because – if we have to extend his artless and predictable analogy – he’s now one of the industrialists welcome in the chancellery to carve up the spoils. “Thank you for being such a pro-business, pro-innovation president,” Sam simpered to Trump at a recent White House dinner for tech titans. “It’s a very refreshing change.” Inevitably, the Trump administration has refused to bring forward any AI regulation at all.
Meanwhile, please remember something Sam and his ironicidal maniacs said earlier this year, when it was suggested that the Chinese AI chatbot DeepSeek might have been trained on some of OpenAI’s work. “We are aware of and reviewing indications that DeepSeek may have inappropriately distilled our models, and will share information as we know more,” his firm’s anguished statement ran. “We take aggressive, proactive countermeasures to protect our technology.” Hilariously, it seemed that the last entity on earth with the power to fight AI theft was OpenAI."
Wednesday, October 8, 2025
OpenAI wasn’t expecting Sora’s copyright drama; The Verge, October 8, 2025
Hayden Field, The Verge; OpenAI wasn’t expecting Sora’s copyright drama
"When OpenAI released its new AI-generated video app Sora last week, it launched with an opt-out policy for copyright holders — media companies would need to expressly indicate they didn’t want their AI-generated characters running rampant on the app. But after days of Nazi SpongeBob, criminal Pikachu, and Sora-philosophizing Rick and Morty, OpenAI CEO Sam Altman announced the company would reverse course and “let rightsholders decide how to proceed.”
In response to a question about why OpenAI changed its policy, Altman said that it came from speaking with stakeholders and suggested he hadn’t expected the outcry.
“I think the theory of what it was going to feel like to people, and then actually seeing the thing, people had different responses,” Altman said. “It felt more different to images than people expected.”"
Friday, October 3, 2025
Harvard Professors May Be Eligible for Payments in $1.5 Billion AI Copyright Settlement; The Harvard Crimson, October 1, 2025
Victoria D. Rengel, The Harvard Crimson; Harvard Professors May Be Eligible for Payments in $1.5 Billion AI Copyright Settlement
"Following mediation, the plaintiffs and defendants filed a motion for the preliminary approval of a settlement on Sept. 5, which included an agreement from Anthropic that it will destroy its pirated databases and pay $1.5 billion in damages to a group of authors and publishers.
On Sept. 25, a California federal judge granted preliminary approval for a settlement, the largest in the history of copyright cases in the U.S.
Each member of the class will receive a payment of approximately $3,000 per pirated work.
Authors whose works are in the databases are not notified separately, but instead must submit their contact information to receive a formal notice of the class action — meaning a number of authors, including many Harvard professors, may be unaware that their works were pirated by Anthropic.
Lynch said Anthropic’s nonconsensual use of her work undermines the purpose behind why she, and other scholars, write and publish their work.
“All of us at Harvard publish, but we thought when we were publishing that we are doing that — to communicate to other human beings,” she said. “Not to be fed into this mill.”"
Wednesday, October 1, 2025
Disney Sends Cease And Desist Letter To Character.ai For Copyright Infringement As Studios Move To Protect IP; Deadline, September 30, 2025
Jill Goldsmith, Deadline; Disney Sends Cease And Desist Letter To Character.ai For Copyright Infringement As Studios Move To Protect IP
"Walt Disney sent a cease-and-desist letter to Character.AI, a “personalized superintelligence platform” that the media giant says is ripping off copyrighted characters without authorization.
The AI startup offers users the ability to create customizable, personalized AI companions that can be totally original but in some cases are inspired by existing characters, including, it seems, Disney icons from Spider-Man and Darth Vader to Moana and Elsa.
The letter is the latest legal salvo by Hollywood as studios begin to step up against AI. Disney has also sued AI company Midjourney for allegedly improper use and distribution of AI-generated characters from Disney films. Disney, Warner Bros. and Universal Pictures this month sued Chinese AI firm MiniMax for copyright infringement."
Monday, September 29, 2025
I Sued Anthropic, and the Unthinkable Happened; The New York Times, September 29, 2025
Andrea Bartz, The New York Times; I Sued Anthropic, and the Unthinkable Happened
"In August 2024, I became one of three named plaintiffs leading a class-action lawsuit against the A.I. company Anthropic for pirating my books and hundreds of thousands of other books to train its A.I. The fight felt daunting, almost preposterous: me — a queer, female thriller writer — versus a company now worth $183 billion?
Thanks to the relentless work of everyone on my legal team, the unthinkable happened: Anthropic agreed to pay authors and publishers $1.5 billion in the largest copyright settlement in history. A federal judge preliminarily approved the agreement last week.
This settlement sends a clear message to the Big Tech companies splashing generative A.I. over every app and page and program: You are not above the law. And it should signal to consumers everywhere that A.I. isn’t an unstoppable tsunami about to overwhelm us. Now is the time for ordinary Americans to recognize our agency and act to put in place the guardrails we want.
The settlement isn’t perfect. It’s absurd that it took an army of lawyers to demonstrate what any 10-year-old knows is true: Thou shalt not steal. At around $3,000 per work, shared by the author and publisher, the damages are far from life-changing (and, some argue, a slap on the wrist for a company flush with cash). I also disagree with the judge’s ruling that, had Anthropic acquired the books legally, training its chatbot on them would have been “fair use.” I write my novels to engage human minds — not to empower an algorithm to mimic my voice and spit out commodity knockoffs to compete directly against my originals in the marketplace, nor to make that algorithm’s creators unfathomably wealthy and powerful.
But as my fellow plaintiff Kirk Wallace Johnson put it, this is “the beginning of a fight on behalf of humans that don’t believe we have to sacrifice everything on the altar of A.I.” Anthropic will destroy its trove of illegally downloaded books; its competitors should take heed to get out of the business of piracy as well. Dozens of A.I. copyright lawsuits have been filed against OpenAI, Microsoft and other companies, led in part by Sylvia Day, Jonathan Franzen, David Baldacci, John Grisham, Stacy Schiff and George R. R. Martin. (The New York Times has also brought a suit against OpenAI and Microsoft.)
Though a settlement isn’t legal precedent, Bartz v. Anthropic may serve as a test case for other A.I. lawsuits, the first domino to fall in an industry whose “move fast, break things” modus operandi led to large-scale theft. Among the plaintiffs of other cases are voice actors, visual artists, record labels, YouTubers, media companies and stock-photo libraries, diverse stakeholders who’ve watched Big Tech encroach on their territory with little regard for copyright law...
Now the book publishing industry has sent a message to all A.I. companies: Our intellectual property isn’t yours for the taking, and you cannot act with impunity. This settlement is an opening gambit in a critical battle that will be waged for years to come."
Sunday, September 28, 2025
Why I gave the world wide web away for free; The Guardian, September 28, 2025
Tim Berners-Lee, The Guardian; Why I gave the world wide web away for free
"Sharing your information in a smart way can also liberate it. Why is your smartwatch writing your biological data to one silo in one format? Why is your credit card writing your financial data to a second silo in a different format? Why are your YouTube comments, Reddit posts, Facebook updates and tweets all stored in different places? Why is the default expectation that you aren’t supposed to be able to look at any of this stuff? You generate all this data – your actions, your choices, your body, your preferences, your decisions. You should own it. You should be empowered by it.
Somewhere between my original vision for web 1.0 and the rise of social media as part of web 2.0, we took the wrong path. We’re now at a new crossroads, one where we must decide if AI will be used for the betterment or to the detriment of society. How can we learn from the mistakes of the past? First of all, we must ensure policymakers do not end up playing the same decade-long game of catchup they have done over social media. The time to decide the governance model for AI was yesterday, so we must act with urgency.
In 2017, I wrote a thought experiment about an AI that works for you. I called it Charlie. Charlie works for you like your doctor or your lawyer, bound by law, regulation and codes of conduct. Why can’t the same frameworks be adopted for AI? We have learned from social media that power rests with the monopolies who control and harvest personal data. We can’t let the same thing happen with AI.
So how do we move forward? Part of the frustration with democracy in the 21st century is that governments have been too slow to meet the demands of digital citizens. The AI industry landscape is fiercely competitive, and development and governance are dictated by companies. The lesson from social media is that this will not create value for the individual.
I coded the world wide web on a single computer in a small room. But that small room didn’t belong to me, it was at Cern. Cern was created in the aftermath of the second world war by the UN and European governments who identified a historic, scientific turning point that required international collaboration. It is hard to imagine a big tech company agreeing to share the world wide web for no commercial reward like Cern allowed me to. That’s why we need a Cern-like not-for-profit body driving forward international AI research.
I gave the world wide web away for free because I thought that it would only work if it worked for everyone. Today, I believe that to be truer than ever. Regulation and global governance are technically feasible, but reliant on political willpower. If we are able to muster it, we have the chance to restore the web as a tool for collaboration, creativity and compassion across cultural borders. We can re-empower individuals, and take the web back. It’s not too late."
Countries Consider A.I.’s Dangers and Benefits at U.N.; The New York Times, September 25, 2025
Steve Lohr, The New York Times; Countries Consider A.I.’s Dangers and Benefits at U.N.
"The United Nations on Thursday announced a plan to establish itself as the leading global forum to guide the path and pace of artificial intelligence, a major foray into the raging debate over the future of the rapidly changing technology.
As part of its General Assembly this week, the organization said it was implementing a “global dialogue on artificial intelligence governance,” to assemble ideas and best practices on A.I. governance. The U.N. also said it would form a 40-member panel of scientific experts to synthesize and analyze the research on A.I. risks and opportunities, in the vein of previous similar efforts by the body on climate change and nuclear policy.
To begin the initiative, dozens of U.N. member nations — and a few tech companies, academics and nonprofits — spent a portion of Thursday summarizing their hopes and concerns about A.I."
Saturday, September 27, 2025
Judge approves $1.5 billion copyright settlement between AI company Anthropic and authors; AP, September 25, 2025
Barbara Ortutay, AP; Judge approves $1.5 billion copyright settlement between AI company Anthropic and authors
" A federal judge on Thursday approved a $1.5 billion settlement between artificial intelligence company Anthropic and authors who allege nearly half a million books had been illegally pirated to train chatbots.
U.S. District Judge William Alsup issued the preliminary approval in San Francisco federal court Thursday after the two sides worked to address his concerns about the settlement, which will pay authors and publishers about $3,000 for each of the books covered by the agreement. It does not apply to future works."