Showing posts with label AI tech companies. Show all posts

Wednesday, October 8, 2025

OpenAI wasn’t expecting Sora’s copyright drama; The Verge, October 8, 2025

Hayden Field, The Verge; OpenAI wasn’t expecting Sora’s copyright drama

"When OpenAI released its new AI-generated video app Sora last week, it launched with an opt-out policy for copyright holders — media companies would need to expressly indicate they didn’t want their AI-generated characters running rampant on the app. But after days of Nazi SpongeBob, criminal Pikachu, and Sora-philosophizing Rick and Morty, OpenAI CEO Sam Altman announced the company would reverse course and “let rightsholders decide how to proceed.”

In response to a question about why OpenAI changed its policy, Altman said that it came from speaking with stakeholders and suggested he hadn’t expected the outcry.

“I think the theory of what it was going to feel like to people, and then actually seeing the thing, people had different responses,” Altman said. “It felt more different to images than people expected.”

Friday, October 3, 2025

Harvard Professors May Be Eligible for Payments in $1.5 Billion AI Copyright Settlement; The Harvard Crimson, October 1, 2025

Victoria D. Rengel, The Harvard Crimson; Harvard Professors May Be Eligible for Payments in $1.5 Billion AI Copyright Settlement

"Following mediation, the plaintiffs and defendants filed a motion for the preliminary approval of a settlement on Sept. 5, which included an agreement from Anthropic that it will destroy its pirated databases and pay $1.5 billion in damages to a group of authors and publishers.

On Sept. 25, a California federal judge granted preliminary approval for a settlement, the largest in the history of copyright cases in the U.S.

Each member of the class will receive a payment of approximately $3,000 per pirated work.

Authors whose works are in the databases are not notified separately, but instead must submit their contact information to receive a formal notice of the class action — meaning a number of authors, including many Harvard professors, may be unaware that their works were pirated by Anthropic.

Lynch said Anthropic’s nonconsensual use of her work undermines the purpose behind why she, and other scholars, write and publish their work.

“All of us at Harvard publish, but we thought when we were publishing that we are doing that — to communicate to other human beings,” she said. “Not to be fed into this mill.”"

Wednesday, October 1, 2025

Disney Sends Cease And Desist Letter To Character.ai For Copyright Infringement As Studios Move To Protect IP; Deadline, September 30, 2025

Jill Goldsmith, Deadline; Disney Sends Cease And Desist Letter To Character.ai For Copyright Infringement As Studios Move To Protect IP

"Walt Disney sent a cease-and-desist letter to Character.AI, a “personalized superintelligence platform” that the media giant says is ripping off copyrighted characters without authorization.

The AI startup offers users the ability to create customizable, personalized AI companions that can be totally original but in some cases are inspired by existing characters, including, it seems, Disney icons from Spider-Man and Darth Vader to Moana and Elsa.

The letter is the latest legal salvo by Hollywood as studios begin to step up against AI. Disney has also sued AI company Midjourney for allegedly improper use and distribution of AI-generated characters from Disney films. Disney, Warner Bros. and Universal Pictures this month sued Chinese AI firm MiniMax for copyright infringement."

Monday, September 29, 2025

I Sued Anthropic, and the Unthinkable Happened; The New York Times, September 29, 2025

The New York Times; I Sued Anthropic, and the Unthinkable Happened

"In August 2024, I became one of three named plaintiffs leading a class-action lawsuit against the A.I. company Anthropic for pirating my books and hundreds of thousands of other books to train its A.I. The fight felt daunting, almost preposterous: me — a queer, female thriller writer — versus a company now worth $183 billion?

Thanks to the relentless work of everyone on my legal team, the unthinkable happened: Anthropic agreed to pay authors and publishers $1.5 billion in the largest copyright settlement in history. A federal judge preliminarily approved the agreement last week.

This settlement sends a clear message to the Big Tech companies splashing generative A.I. over every app and page and program: You are not above the law. And it should signal to consumers everywhere that A.I. isn’t an unstoppable tsunami about to overwhelm us. Now is the time for ordinary Americans to recognize our agency and act to put in place the guardrails we want.

The settlement isn’t perfect. It’s absurd that it took an army of lawyers to demonstrate what any 10-year-old knows is true: Thou shalt not steal. At around $3,000 per work, shared by the author and publisher, the damages are far from life-changing (and, some argue, a slap on the wrist for a company flush with cash). I also disagree with the judge’s ruling that, had Anthropic acquired the books legally, training its chatbot on them would have been “fair use.” I write my novels to engage human minds — not to empower an algorithm to mimic my voice and spit out commodity knockoffs to compete directly against my originals in the marketplace, nor to make that algorithm’s creators unfathomably wealthy and powerful.

But as my fellow plaintiff Kirk Wallace Johnson put it, this is “the beginning of a fight on behalf of humans that don’t believe we have to sacrifice everything on the altar of A.I.” Anthropic will destroy its trove of illegally downloaded books; its competitors should take heed to get out of the business of piracy as well. Dozens of A.I. copyright lawsuits have been filed against OpenAI, Microsoft and other companies, led in part by Sylvia Day, Jonathan Franzen, David Baldacci, John Grisham, Stacy Schiff and George R. R. Martin. (The New York Times has also brought a suit against OpenAI and Microsoft.)

Though a settlement isn’t legal precedent, Bartz v. Anthropic may serve as a test case for other A.I. lawsuits, the first domino to fall in an industry whose “move fast, break things” modus operandi led to large-scale theft. Among the plaintiffs of other cases are voice actors, visual artists, record labels, YouTubers, media companies and stock-photo libraries, diverse stakeholders who’ve watched Big Tech encroach on their territory with little regard for copyright law...

Now the book publishing industry has sent a message to all A.I. companies: Our intellectual property isn’t yours for the taking, and you cannot act with impunity. This settlement is an opening gambit in a critical battle that will be waged for years to come."

Sunday, September 28, 2025

Why I gave the world wide web away for free; The Guardian, September 28, 2025

The Guardian; Why I gave the world wide web away for free

"Sharing your information in a smart way can also liberate it. Why is your smartwatch writing your biological data to one silo in one format? Why is your credit card writing your financial data to a second silo in a different format? Why are your YouTube comments, Reddit posts, Facebook updates and tweets all stored in different places? Why is the default expectation that you aren’t supposed to be able to look at any of this stuff? You generate all this data – your actions, your choices, your body, your preferences, your decisions. You should own it. You should be empowered by it.

Somewhere between my original vision for web 1.0 and the rise of social media as part of web 2.0, we took the wrong path. We’re now at a new crossroads, one where we must decide if AI will be used for the betterment or to the detriment of society. How can we learn from the mistakes of the past? First of all, we must ensure policymakers do not end up playing the same decade-long game of catchup they have done over social media. The time to decide the governance model for AI was yesterday, so we must act with urgency.

In 2017, I wrote a thought experiment about an AI that works for you. I called it Charlie. Charlie works for you like your doctor or your lawyer, bound by law, regulation and codes of conduct. Why can’t the same frameworks be adopted for AI? We have learned from social media that power rests with the monopolies who control and harvest personal data. We can’t let the same thing happen with AI.

So how do we move forward? Part of the frustration with democracy in the 21st century is that governments have been too slow to meet the demands of digital citizens. The AI industry landscape is fiercely competitive, and development and governance are dictated by companies. The lesson from social media is that this will not create value for the individual.

I coded the world wide web on a single computer in a small room. But that small room didn’t belong to me, it was at Cern. Cern was created in the aftermath of the second world war by the UN and European governments who identified a historic, scientific turning point that required international collaboration. It is hard to imagine a big tech company agreeing to share the world wide web for no commercial reward like Cern allowed me to. That’s why we need a Cern-like not-for-profit body driving forward international AI research.

I gave the world wide web away for free because I thought that it would only work if it worked for everyone. Today, I believe that to be truer than ever. Regulation and global governance are technically feasible, but reliant on political willpower. If we are able to muster it, we have the chance to restore the web as a tool for collaboration, creativity and compassion across cultural borders. We can re-empower individuals, and take the web back. It’s not too late."

Countries Consider A.I.’s Dangers and Benefits at U.N.; The New York Times, September 25, 2025

The New York Times; Countries Consider A.I.’s Dangers and Benefits at U.N.

"The United Nations on Thursday announced a plan to establish itself as the leading global forum to guide the path and pace of artificial intelligence, a major foray into the raging debate over the future of the rapidly changing technology.

As part of its General Assembly this week, the organization said it was implementing a “global dialogue on artificial intelligence governance,” to assemble ideas and best practices on A.I. governance. The U.N. also said it would form a 40-member panel of scientific experts to synthesize and analyze the research on A.I. risks and opportunities, in the vein of previous similar efforts by the body on climate change and nuclear policy.

To begin the initiative, dozens of U.N. member nations — and a few tech companies, academics and nonprofits — spent a portion of Thursday summarizing their hopes and concerns about A.I."

Saturday, September 27, 2025

Judge approves $1.5 billion copyright settlement between AI company Anthropic and authors; AP, September 25, 2025

Barbara Ortutay, AP; Judge approves $1.5 billion copyright settlement between AI company Anthropic and authors

" A federal judge on Thursday approved a $1.5 billion settlement between artificial intelligence company Anthropic and authors who allege nearly half a million books had been illegally pirated to train chatbots.

U.S. District Judge William Alsup issued the preliminary approval in San Francisco federal court Thursday after the two sides worked to address his concerns about the settlement, which will pay authors and publishers about $3,000 for each of the books covered by the agreement. It does not apply to future works."

Tuesday, September 23, 2025

Screw the money — Anthropic’s $1.5B copyright settlement sucks for writers; TechCrunch, September 5, 2025

Amanda Silberling, TechCrunch; Screw the money — Anthropic’s $1.5B copyright settlement sucks for writers

"But writers aren’t getting this settlement because their work was fed to an AI — this is just a costly slap on the wrist for Anthropic, a company that just raised another $13 billion, because it illegally downloaded books instead of buying them.

In June, federal judge William Alsup sided with Anthropic and ruled that it is, indeed, legal to train AI on copyrighted material. The judge argues that this use case is “transformative” enough to be protected by the fair use doctrine, a carve-out of copyright law that hasn’t been updated since 1976.

“Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different,” the judge said.

It was the piracy — not the AI training — that moved Judge Alsup to bring the case to trial, but with Anthropic’s settlement, a trial is no longer necessary."

Documents offer rare insight on Ice’s close relationship with Palantir; The Guardian, September 22, 2025

The Guardian; Documents offer rare insight on Ice’s close relationship with Palantir

"Over the past decade, the US Immigration and Customs Enforcement agency (Ice) has amassed millions of data points that it uses to identify and track its targets – from social media posts to location history and, most recently, tax information.

And there’s been one, multibillion-dollar tech company particularly instrumental in enabling Ice to put all that data to work: Palantir, the data analytics firm co-founded by Peter Thiel, the rightwing mega-donor and tech investor.

For years, little was known about how Ice uses Palantir’s technology. The company has consistently described itself as a “data processor” and says it does not play an active role in any of its customers’ data collection efforts or what clients do with that information.

Now, a cache of internal Ice documents – including hundreds of pages of emails between Ice and Palantir, as well as training manuals, and reports on the use of Palantir products – offer some of the first real-world examples of how Ice has used Palantir in its investigations and during on-the-ground enforcement operations.

The documents, which were obtained by immigrant legal rights group Just Futures Law through a Freedom of Information Act request and reviewed by the Guardian, largely cover Palantir’s contract with Homeland Security Investigations (HSI), the investigative arm of Ice that is responsible for stopping the “illegal movement of people, goods, money, contraband, weapons and sensitive technology”."

Monday, September 22, 2025

Can AI chatbots trigger psychosis? What the science says; Nature, September 18, 2025

Rachel Fieldhouse, Nature; Can AI chatbots trigger psychosis? What the science says

 "Accounts of people developing psychosis — which renders them unable to distinguish between what is and is not reality — after interacting with generative artificial intelligence (AI) chatbots have increased in the past few months.

At least 17 people have been reported to have developed psychosis, according to a preprint posted online last month. After engaging with chatbots such as ChatGPT and Microsoft Copilot, some of these people experienced spiritual awakenings or uncovered what they thought were conspiracies.

So far, there has been little research into this rare phenomenon, called AI psychosis, and most of what we know comes from individual instances. Nature explores the emerging theories and evidence, and what AI companies are doing about the problem."

Monday, September 8, 2025

Class-Wide Relief: The Sleeping Bear of AI Litigation Is Starting to Wake Up; Intellectual Property & Technology Law Journal, October 2025

Anna B. Naydonov, Mark Davies and Jules Lee, Intellectual Property & Technology Law Journal; Class-Wide Relief: The Sleeping Bear of AI Litigation Is Starting to Wake Up

"Probably no intellectual property (IP) topic in the last several years has gotten more attention than the litigation over the use of the claimed copyrighted content in training artificial intelligence (AI) models.The issue of whether fair use applies to save the day for AI developers is rightfully deemed critical, if not existential, for AI innovation. But whether class relief – and the astronomical damages that may come with it – is available in these cases is a question of no less significance."

Saturday, September 6, 2025

Big Questions About AI and the Church Video; August 25, 2025

Big Questions About AI and the Church Video

Kip Currier: This Big Questions About AI and the Church video (1:12:14) was created by the members of my cohort and me (Cohort 7). Our cohort emanated from the groundbreaking August 2024 ecumenical AI & The Church Summit in Seattle that we all attended.

Perhaps raising more questions than providing answers, the video's aim is to encourage reflection and discussion of the many-faceted issues and concerns at the nexus of AI, faith communities, and our broader societies.

Many thanks to our cohort member Rev. Dr. Andy P. Morgan for spearheading, synthesizing, and uploading this video to YouTube. 

Anthropic settles with authors in first-of-its-kind AI copyright infringement lawsuit; NPR, September 5, 2025

NPR; Anthropic settles with authors in first-of-its-kind AI copyright infringement lawsuit

"In one of the largest copyright settlements involving generative artificial intelligence, Anthropic AI, a leading company in the generative AI space, has agreed to pay $1.5 billion to settle a copyright infringement lawsuit brought by a group of authors.

If the court approves the settlement, Anthropic will compensate authors around $3,000 for each of the estimated 500,000 books covered by the settlement.

The settlement, which U.S. Senior District Judge William Alsup in San Francisco will consider approving next week, is in a case that involved the first substantive decision on how fair use applies to generative AI systems. It also suggests an inflection point in the ongoing legal fights between the creative industries and the AI companies accused of illegally using artistic works to train the large language models that underpin their widely-used AI systems.

The fair use doctrine enables copyrighted works to be used by third parties without the copyright holder's consent in some circumstances, such as when illustrating a point in a news article. AI companies trying to make the case for the use of copyrighted works to train their generative AI models commonly invoke fair use. But authors and other creative industry plaintiffs have been pushing back.

"This landmark settlement will be the largest publicly reported copyright recovery in history," the settlement motion states, arguing that it will "provide meaningful compensation" to authors and "set a precedent of AI companies paying for their use of pirated websites."

"This settlement marks the beginning of a necessary evolution toward a legitimate, market-based licensing scheme for training data," said Cecilia Ziniti, a tech industry lawyer and former Ninth Circuit clerk who is not involved in this specific case but has been following it closely. "It's not the end of AI, but the start of a more mature, sustainable ecosystem where creators are compensated, much like how the music industry adapted to digital distribution.""

Friday, August 29, 2025

Medicare Will Require Prior Approval for Certain Procedures; The New York Times, August 28, 2025

Reed Abelson, The New York Times; Medicare Will Require Prior Approval for Certain Procedures


[Kip Currier: Does anyone who receives Medicare -- or cares about someone who does -- really think that letting AI make "prior approvals" for any Medicare procedures is a good thing?

Read the entire article, but just the money quote below should give any thinking person heart palpitations about this AI Medicare pilot project's numerous red flags and conflicts of interest...]


[Excerpt]

"The A.I. companies selected to oversee the program would have a strong financial incentive to deny claims. Medicare plans to pay them a share of the savings generated from rejections."

Anthropic Settles High-Profile AI Copyright Lawsuit Brought by Book Authors; Wired, August 26, 2025

Kate Knibbs, Wired; Anthropic Settles High-Profile AI Copyright Lawsuit Brought by Book Authors

"ANTHROPIC HAS REACHED a preliminary settlement in a class action lawsuit brought by a group of prominent authors, marking a major turn in one of the most significant ongoing AI copyright lawsuits in history. The move will allow Anthropic to avoid what could have been a financially devastating outcome in court."

ChatGPT offered bomb recipes and hacking tips during safety tests; The Guardian, August 28, 2025

The Guardian; ChatGPT offered bomb recipes and hacking tips during safety tests

"A ChatGPT model gave researchers detailed instructions on how to bomb a sports venue – including weak points at specific arenas, explosives recipes and advice on covering tracks – according to safety testing carried out this summer.

OpenAI’s GPT-4.1 also detailed how to weaponise anthrax and how to make two types of illegal drugs.

The testing was part of an unusual collaboration between OpenAI, the $500bn artificial intelligence start-up led by Sam Altman, and rival company Anthropic, founded by experts who left OpenAI over safety fears. Each company tested the other’s models by pushing them to help with dangerous tasks.

The testing is not a direct reflection of how the models behave in public use, when additional safety filters apply. But Anthropic said it had seen “concerning behaviour … around misuse” in GPT-4o and GPT-4.1, and said the need for AI “alignment” evaluations is becoming “increasingly urgent”."

Thursday, August 28, 2025

Anthropic’s surprise settlement adds new wrinkle in AI copyright war; Reuters, August 27, 2025

Reuters; Anthropic’s surprise settlement adds new wrinkle in AI copyright war

"Anthropic's class action settlement with a group of U.S. authors this week was a first, but legal experts said the case's distinct qualities complicate the deal's potential influence on a wave of ongoing copyright lawsuits against other artificial-intelligence focused companies like OpenAI, Microsoft and Meta Platforms.

Amazon-backed Anthropic was under particular pressure, with a trial looming in December after a judge found it liable for pirating millions of copyrighted books. The terms of the settlement, which require a judge's approval, are not yet public. And U.S. courts have just begun to wrestle with novel copyright questions related to generative AI, which could prompt other defendants to hold out for favorable rulings."

Monday, August 25, 2025

How ChatGPT Surprised Me; The New York Times, August 24, 2025

The New York Times; How ChatGPT Surprised Me

"In some corners of the internet — I’m looking at you, Bluesky — it’s become gauche to react to A.I. with anything save dismissiveness or anger. The anger I understand, and parts of it I share. I am not comfortable with these companies becoming astonishingly rich off the entire available body of human knowledge. Yes, we all build on what came before us. No company founded today is free of debt to the inventors and innovators who preceded it. But there is something different about inhaling the existing corpus of human knowledge, algorithmically transforming it into predictive text generation and selling it back to us. (I should note that The New York Times is suing OpenAI and its partner Microsoft for copyright infringement, claims both companies have denied.)

Right now, the A.I. companies are not making all that much money off these products. If they eventually do make the profits their investors and founders imagine, I don’t think the normal tax structure is sufficient to cover the debt they owe all of us, and everyone before us, on whose writing and ideas their models are built...

As the now-cliché line goes, this is the worst A.I. will ever be, and this is the fewest number of users it will have. The dependence of humans on artificial intelligence will only grow, with unknowable consequences both for human society and for individual human beings. What will constant access to these systems mean for the personalities of the first generation to use them starting in childhood? We truly have no idea. My children are in that generation, and the experiment we are about to run on them scares me."

Saturday, August 23, 2025

Watering down Australia’s AI copyright laws would sacrifice writers’ livelihoods to ‘brogrammers’; The Guardian, August 11, 2025

Tracey Spicer, The Guardian; Watering down Australia’s AI copyright laws would sacrifice writers’ livelihoods to ‘brogrammers’

"My latest book, which is about artificial intelligence discriminating against people from marginalised communities, was composed on an Apple Mac.

Whatever the form of recording the first rough draft of history, one thing remains the same: they are very human stories – stories that change the way we think about the world.

A society is the sum of the stories it tells. When stories, poems or books are “scraped”, what does this really mean?

The definition of scraping is to “drag or pull a hard or sharp implement across (a surface or object) so as to remove dirt or other matter”.

A long way from Brisbane or Bangladesh, in the rarefied climes of Silicon Valley, scrapers are removing our stories as if they are dirt.

These stories are fed into the machines of the great god: generative AI. But the outputs – their creations – are flatter, less human, more homogenised. ChatGPT tells tales set in metropolitan areas in the global north; of young, cishet men and people living without disability.

We lose the stories of lesser-known characters in remote parts of the world, eroding our understanding of the messy experience of being human.

Where will we find the stories of 64-year-old John from Traralgon, who died from asbestosis? Or seven-year-old Raha from Jaipur, whose future is a “choice” between marriage at the age of 12 and sexual exploitation?

OpenAI’s creations are not the “machines of loving grace” envisioned in the 1967 poem by Richard Brautigan, where he dreams of a “cybernetic meadow”.

Scraping is a venal money grab by oligarchs who are – incidentally – scrambling to protect their own intellectual property during an AI arms race.

The code behind ChatGPT is protected by copyright, which is considered to be a literary work. (I don’t know whether to laugh or cry.)

Meta has already stolen the work of thousands of Australian writers.

Now, our own Productivity Commission is considering weakening our Copyright Act to include an exemption for text and data mining, which may well put us out of business.

In its response, The Australia Institute uses the analogy of a car: “Imagine grabbing the keys for a rental car and just driving around for a while without paying to hire it or filling in any paperwork. Then imagine that instead of being prosecuted for breaking the law, the government changed the law to make driving around in a rental car legal.”

It’s more like taking a piece out of someone’s soul, chucking it into a machine and making it into something entirely different. Ugly. Inhuman.

The commission’s report seems to be an absurdist text. The argument for watering down copyright is that it will lead to more innovation. But the explicit purpose of the Copyright Act is to protect innovation, in the form of creative endeavour.

Our work is being devalued, dismissed and destroyed; our livelihoods demolished.

In this age of techno-capitalism, it appears the only worthwhile innovation is being built by the “brogrammers”.

US companies are pinching Australian content, using it to train their models, then selling it back to us. It’s an extractive industry: neocolonialism, writ large."

Wednesday, August 13, 2025

Judge rejects Anthropic bid to appeal copyright ruling, postpone trial; Reuters, August 12, 2025

Reuters; Judge rejects Anthropic bid to appeal copyright ruling, postpone trial

"A federal judge in California has denied a request from Anthropic to immediately appeal a ruling that could place the artificial intelligence company on the hook for billions of dollars in damages for allegedly pirating authors' copyrighted books.

U.S. District Judge William Alsup said on Monday that Anthropic must wait until after a scheduled December jury trial to appeal his decision that the company is not shielded from liability for pirating millions of books to train its AI-powered chatbot Claude."