Showing posts with label Judge William Alsup.

Wednesday, August 13, 2025

Judge rejects Anthropic bid to appeal copyright ruling, postpone trial; Reuters, August 12, 2025

Reuters; Judge rejects Anthropic bid to appeal copyright ruling, postpone trial

"A federal judge in California has denied a request from Anthropic to immediately appeal a ruling that could place the artificial intelligence company on the hook for billions of dollars in damages for allegedly pirating authors' copyrighted books.

U.S. District Judge William Alsup said on Monday that Anthropic must wait until after a scheduled December jury trial to appeal his decision that the company is not shielded from liability for pirating millions of books to train its AI-powered chatbot Claude."

Saturday, August 9, 2025

AI industry horrified to face largest copyright class action ever certified; Ars Technica, August 8, 2025

ASHLEY BELANGER, Ars Technica; AI industry horrified to face largest copyright class action ever certified

"AI industry groups are urging an appeals court to block what they say is the largest copyright class action ever certified. They've warned that a single lawsuit raised by three authors over Anthropic's AI training now threatens to "financially ruin" the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement.

Last week, Anthropic petitioned to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a "rigorous analysis" of the potential class and instead based his judgment on his "50 years" of experience, Anthropic said.

If the appeals court denies the petition, Anthropic argued, the emerging company may be doomed. As Anthropic argued, it now "faces hundreds of billions of dollars in potential damages liability at trial in four months" based on a class certification rushed at "warp speed" that involves "up to seven million potential claimants, whose works span a century of publishing history," each possibly triggering a $150,000 fine.

Confronted with such extreme potential damages, Anthropic may lose its rights to raise valid defenses of its AI training, deciding it would be more prudent to settle, the company argued. And that could set an alarming precedent, considering all the other lawsuits generative AI (GenAI) companies face over training on copyrighted materials, Anthropic argued."

Sunday, July 20, 2025

Judge Rules Class Action Suit Against Anthropic Can Proceed; Publishers Weekly, July 18, 2025

Jim Milliot, Publishers Weekly; Judge Rules Class Action Suit Against Anthropic Can Proceed

"In a major victory for authors, U.S. District Judge William Alsup ruled July 17 that three writers suing Anthropic for copyright infringement can represent all other authors whose books the AI company allegedly pirated to train its AI model as part of a class action lawsuit.

In late June, Alsup of the Northern District of California, ruled in Bartz v. Anthropic that the AI company's training of its Claude LLMs on authors' works was "exceedingly transformative," and therefore protected by fair use. However, Alsup also determined that the company's practice of downloading pirated books from sites including Books3, Library Genesis, and Pirate Library Mirror (PiLiMi) to build a permanent digital library was not covered by fair use.

Alsup’s most recent ruling follows an amended complaint from the authors looking to certify classes of copyright owners in a “Pirated Books Class” and in a “Scanned Books Class.” In his decision, Alsup certified only a LibGen and PiLiMi Pirated Books Class, writing that “this class is limited to actual or beneficial owners of timely registered copyrights in ISBN/ASIN-bearing books downloaded by Anthropic from these two pirate libraries.”

Alsup stressed that “the class is not limited to authors or author-like entities,” explaining that “a key point is to cover everyone who owns the specific copyright interest in play, the right to make copies, either as the actual or as the beneficial owner.” Later in his decision, Alsup makes it clear who is covered by the ruling: “A beneficial owner...is someone like an author who receives royalties from any publisher’s revenues or recoveries from the right to make copies. Yes, the legal owner might be the publisher but the author has a definite stake in the royalties, so the author has standing to sue. And, each stands to benefit from the copyright enforcement at the core of our case however they then divide the benefit.”"

US authors suing Anthropic can band together in copyright class action, judge rules; Reuters, July 17, 2025

Reuters; US authors suing Anthropic can band together in copyright class action, judge rules

"A California federal judge ruled on Thursday that three authors suing artificial intelligence startup Anthropic for copyright infringement can represent writers nationwide whose books Anthropic allegedly pirated to train its AI system.

U.S. District Judge William Alsup said the authors can bring a class action on behalf of all U.S. writers whose works Anthropic allegedly downloaded from "pirate libraries" LibGen and PiLiMi to create a repository of millions of books in 2021 and 2022."

Tuesday, June 24, 2025

Anthropic’s AI copyright ‘win’ is more complicated than it looks; Fast Company, June 24, 2025

CHRIS STOKEL-WALKER, Fast Company; Anthropic’s AI copyright ‘win’ is more complicated than it looks

"And that’s the catch: This wasn’t an unvarnished win for Anthropic. Like other tech companies, Anthropic allegedly sourced training materials from piracy sites for ease—a fact that clearly troubled the court. “This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use,” Alsup wrote, referring to Anthropic’s alleged pirating of more than 7 million books.

That alone could carry billions in liability, with statutory damages starting at $750 per book—a trial on that issue is still to come.

So while tech companies may still claim victory (with some justification, given the fair use precedent), the same ruling also implies that companies will need to pay substantial sums to legally obtain training materials. OpenAI, for its part, has in the past argued that licensing all the copyrighted material needed to train its models would be practically impossible.

Joanna Bryson, a professor of AI ethics at the Hertie School in Berlin, says the ruling is “absolutely not” a blanket win for tech companies. “First of all, it’s not the Supreme Court. Secondly, it’s only one jurisdiction: The U.S.,” she says. “I think they don’t entirely have purchase over this thing about whether or not it was transformative in the sense of changing Claude’s output.”"

Anthropic wins key US ruling on AI training in authors' copyright lawsuit; Reuters, June 24, 2025

Reuters; Anthropic wins key US ruling on AI training in authors' copyright lawsuit

 "A federal judge in San Francisco ruled late on Monday that Anthropic's use of books without permission to train its artificial intelligence system was legal under U.S. copyright law.

Siding with tech companies on a pivotal question for the AI industry, U.S. District Judge William Alsup said Anthropic made "fair use" of books by writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson to train its Claude large language model.

Alsup also said, however, that Anthropic's copying and storage of more than 7 million pirated books in a "central library" infringed the authors' copyrights and was not fair use. The judge has ordered a trial in December to determine how much Anthropic owes for the infringement."

Saturday, May 24, 2025

Judge Hints Anthropic’s AI Training on Books Is Fair Use; Bloomberg Law, May 22, 2025

Bloomberg Law; Judge Hints Anthropic’s AI Training on Books Is Fair Use

"A California federal judge is leaning toward finding Anthropic PBC violated copyright law when it made initial copies of pirated books, but that its subsequent uses to train their generative AI models qualify as fair use.

“I’m inclined to say they did violate the Copyright Act but the subsequent uses were fair use,” Judge William Alsup said Thursday during a hearing in San Francisco. “That’s kind of the way I’m leaning right now,” he said, but concluded the 90-minute hearing by clarifying that his decision isn’t final. “Sometimes I say that and change my mind."...

The first judge to rule will provide a window into how federal courts interpret the fair use argument for training generative artificial intelligence models with copyrighted materials. A decision against Anthropic could disrupt the billion-dollar business model behind many AI companies, which rely on the belief that training with unlicensed copyrighted content doesn’t violate the law."

Wednesday, August 19, 2020

Self-Driving to Federal Prison: The Trade Secret Theft Saga of Anthony Levandowski Continues; Lexology, August 13, 2020

Seyfarth Shaw LLP - Robert Milligan and Darren W. Dummit, Lexology; Self-Driving to Federal Prison: The Trade Secret Theft Saga of Anthony Levandowski Continues

"Judge Aslup, while steadfastly respectful of Levandowski as a good person and as a brilliant man who the world would learn a lot listening to, nevertheless found prison time to be the best available deterrent to engineers and employees privy to trade secrets worth billions of dollars to competitors: “You’re giving the green light to every future engineer to steal trade secrets,” he told Levandowski’s attorneys. “Prison time is the answer to that.” To further underscore the importance of deterring similar behavior in the high stakes tech world, Judge Aslup required Levandowski to give the aforementioned public speeches describing how he went to prison."