Showing posts with label fair use.

Sunday, February 16, 2025

Court filings show Meta paused efforts to license books for AI training; TechCrunch, February 14, 2025

 Kyle Wiggers, TechCrunch; Court filings show Meta paused efforts to license books for AI training

"According to one transcript, Sy Choudhury, who leads Meta’s AI partnership initiatives, said that Meta’s outreach to various publishers was met with “very slow uptake in engagement and interest.”

“I don’t recall the entire list, but I remember we had made a long list from initially scouring the Internet of top publishers, et cetera,” Choudhury said, per the transcript, “and we didn’t get contact and feedback from — from a lot of our cold call outreaches to try to establish contact.”

Choudhury added, “There were a few, like, that did, you know, engage, but not many.”

According to the court transcripts, Meta paused certain AI-related book licensing efforts in early April 2023 after encountering “timing” and other logistical setbacks. Choudhury said some publishers, in particular fiction book publishers, turned out to not in fact have the rights to the content that Meta was considering licensing, per a transcript.

“I’d like to point out that the — in the fiction category, we quickly learned from the business development team that most of the publishers we were talking to, they themselves were representing that they did not have, actually, the rights to license the data to us,” Choudhury said. “And so it would take a long time to engage with all their authors.”"

Wednesday, February 12, 2025

Court: Training AI Model Based on Copyrighted Data Is Not Fair Use as a Matter of Law; The National Law Review, February 11, 2025

 Joseph A. Meckes and Joseph Grasser of Squire Patton Boggs (US) LLP - Global IP and Technology Law Blog, The National Law Review; Court: Training AI Model Based on Copyrighted Data Is Not Fair Use as a Matter of Law

"In what may turn out to be an influential decision, Judge Stephanos Bibas ruled as a matter of law in Thompson Reuters v. Ross Intelligence that creating short summaries of law to train Ross Intelligence’s artificial intelligence legal research application not only infringes Thompson Reuters’ copyrights as a matter of law but that the copying is not fair use. Judge Bibas had previously ruled that infringement and fair use were issues for the jury but changed his mind: “A smart man knows when he is right; a wise man knows when he is wrong.”

At issue in the case was whether Ross Intelligence directly infringed Thompson Reuters’ copyrights in its case law headnotes that are organized by Westlaw’s proprietary Key Number system. Thompson Reuters contended that Ross Intelligence’s contractor copied those headnotes to create “Bulk Memos.” Ross Intelligence used the Bulk Memos to train its competitive AI-powered legal research tool. Judge Bibas ruled that (i) the West headnotes were sufficiently original and creative to be copyrightable, and (ii) some of the Bulk Memos used by Ross were so similar that they infringed as a matter of law...

In other words, even if a work is selected entirely from the public domain, the simple act of selection is enough to give rise to copyright protection."

Tuesday, January 28, 2025

It's Copyright Week 2025: Join Us in the Fight for Better Copyright Law and Policy; Electronic Frontier Foundation (EFF), January 27, 2025

 KATHARINE TRENDACOSTA, Electronic Frontier Foundation (EFF); It's Copyright Week 2025: Join Us in the Fight for Better Copyright Law and Policy

"We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation 

We continue to fight for a version of copyright that does what it is supposed to. And so, every year, EFF and a number of diverse organizations participate in Copyright Week. Each year, we pick five copyright issues to highlight and advocate a set of principles of copyright law. This year’s issues are: 

  • Monday: Copyright Policy Should Be Made in the Open With Input From Everyone: Copyright is not a niche concern. It affects everyone’s experience online, therefore laws and policy should be made in the open and with users’ concerns represented and taken into account. 
  • Tuesday: Copyright Enforcement as a Tool of Censorship: Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.  
  • Wednesday: Device and Digital Ownership: As the things we buy increasingly exist either in digital form or as devices with software, we also find ourselves subject to onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it – meaning you can learn how it works, repair it, remove unwanted features, or tinker with it to make it work in a new way.  
  • Thursday: The Preservation and Sharing of Information and Culture: Copyright often blocks the preservation and sharing of information and culture, traditionally in the public interest. Copyright law and policy should encourage and not discourage the saving and sharing of information.
  • Friday: Free Expression and Fair Use: Copyright policy should encourage creativity, not hamper it. Fair use makes it possible for us to comment, criticize, and rework our common culture.  

Every day this week, we’ll be sharing links to blog posts on these topics at https://www.eff.org/copyrightweek." 

Sunday, January 19, 2025

Congress Must Change Copyright Law for AI | Opinion; Newsweek, January 16, 2025

 Assistant Professor of Business Law, Georgia College and State University, Newsweek; Congress Must Change Copyright Law for AI | Opinion

"Luckily, the Constitution points the way forward. In Article I, Section 8, Congress is explicitly empowered "to promote the Progress of Science" through copyright law. That is to say, the power to create copyrights isn't just about protecting content creators, it's also about advancing human knowledge and innovation.

When the Founders gave Congress this power, they couldn't have imagined artificial intelligence, but they clearly understood that intellectual property laws would need to evolve to promote scientific progress. Congress therefore not only has the authority to adapt copyright law for the AI age, it has the duty to ensure our intellectual property framework promotes rather than hinders technological progress.

Consider what's at risk with inaction...

While American companies are struggling with copyright constraints, China is racing ahead with AI development, unencumbered by such concerns. The Chinese Communist Party has made it clear that they view AI supremacy as a key strategic goal, and they're not going to let intellectual property rights stand in their way.

The choice before us is clear, we can either reform our copyright laws to enable responsible AI development at home or we can watch as the future of AI is shaped by authoritarian powers abroad. The cost of inaction isn't just measured in lost innovation or economic opportunity, it is measured in our diminishing ability to ensure AI develops in alignment with democratic values and a respect for human rights.

The ideal solution here isn't to abandon copyright protection entirely, but to craft a careful exemption for AI training. This could even include provisions for compensating content creators through a mandated licensing framework or revenue-sharing system, ensuring that AI companies can access the data they need while creators can still benefit from and be credited for their work's use in training these models.

Critics will argue that this represents a taking from creators for the benefit of tech companies, but this misses the broader picture. The benefits of AI development flow not just to tech companies but to society as a whole. We should recognize that allowing AI models to learn from human knowledge serves a crucial public good, one we're at risk of losing if Congress doesn't act."

Thursday, January 16, 2025

In AI copyright case, Zuckerberg turns to YouTube for his defense; TechCrunch, January 15, 2025

 TechCrunch; In AI copyright case, Zuckerberg turns to YouTube for his defense

"Meta CEO Mark Zuckerberg appears to have used YouTube’s battle to remove pirated content to defend his own company’s use of a data set containing copyrighted e-books, reveals newly released snippets of a deposition he gave late last year.

The deposition, which was part of a complaint submitted to the court by plaintiffs’ attorneys, is related to the AI copyright case Kadrey v. Meta. It’s one of many such cases winding through the U.S. court system that’s pitting AI companies against authors and other IP holders. For the most part, the defendants in these cases – AI companies – claim that training on copyrighted content is “fair use.” Many copyright holders disagree."

Wednesday, January 15, 2025

'The New York Times' takes OpenAI to court. ChatGPT's future could be on the line; NPR, January 14, 2025

 NPR; 'The New York Times' takes OpenAI to court. ChatGPT's future could be on the line

"A group of news organizations, led by The New York Times, took ChatGPT maker OpenAI to federal court on Tuesday in a hearing that could determine whether the tech company has to face the publishers in a high-profile copyright infringement trial.

Three publishers' lawsuits against OpenAI and its financial backer Microsoft have been merged into one case. Leading each of the three combined cases are the Times, The New York Daily News and the Center for Investigative Reporting.

Other publishers, like the Associated Press, News Corp. and Vox Media, have reached content-sharing deals with OpenAI, but the three litigants in this case are taking the opposite path: going on the offensive."

Tuesday, December 31, 2024

Column: A Faulkner classic and Popeye enter the public domain while copyright only gets more confusing; Los Angeles Times, December 31, 2024

 Michael Hiltzik, Los Angeles Times; Column: A Faulkner classic and Popeye enter the public domain while copyright only gets more confusing

"The annual flow of copyrighted works into the public domain underscores how the progressive lengthening of copyright protection is counter to the public interest—indeed, to the interests of creative artists. The initial U.S. copyright act, passed in 1790, provided for a term of 28 years including a 14-year renewal. In 1909, that was extended to 56 years including a 28-year renewal.

In 1976, the term was changed to the creator’s life plus 50 years. In 1998, Congress passed the Copyright Term Extension Act, which is known as the Sonny Bono Act after its chief promoter on Capitol Hill. That law extended the basic term to life plus 70 years; works for hire (in which a third party owns the rights to a creative work), pseudonymous and anonymous works were protected for 95 years from first publication or 120 years from creation, whichever is shorter.

Along the way, Congress extended copyright protection from written works to movies, recordings, performances and ultimately to almost all works, both published and unpublished.

Once a work enters the public domain, Jenkins observes, “community theaters can screen the films. Youth orchestras can perform the music publicly, without paying licensing fees. Online repositories such as the Internet Archive, HathiTrust, Google Books and the New York Public Library can make works fully available online. This helps enable both access to and preservation of cultural materials that might otherwise be lost to history.”"

Anthropic Agrees to Enforce Copyright Guardrails on New AI Tools; Bloomberg Law, December 30, 2024

Annelise Levy, Bloomberg Law; Anthropic Agrees to Enforce Copyright Guardrails on New AI Tools

"Anthropic PBC must apply guardrails to prevent its future AI tools from producing infringing copyrighted content, according to a Monday agreement reached with music publishers suing the company for infringing protected song lyrics. 

Eight music publishers—including Universal Music Corp. and Concord Music Group—and Anthropic filed a stipulation partly resolving the publishers’ preliminary injunction motion in the US District Court for the Northern District of California. The publishers’ request that Anthropic refrain from using unauthorized copies of lyrics to train future AI models remains pending."

Friday, December 27, 2024

Tech companies face tough AI copyright questions in 2025; Reuters, December 27, 2024

 Reuters; Tech companies face tough AI copyright questions in 2025

"The new year may bring pivotal developments in a series of copyright lawsuits that could shape the future business of artificial intelligence.

The lawsuits from authors, news outlets, visual artists, musicians and other copyright owners accuse OpenAI, Anthropic, Meta Platforms and other technology companies of using their work to train chatbots and other AI-based content generators without permission or payment.

Courts will likely begin hearing arguments starting next year on whether the defendants' copying amounts to "fair use," which could be the AI copyright war's defining legal question."

Saturday, December 21, 2024

Every AI Copyright Lawsuit in the US, Visualized; Wired, December 19, 2024

Kate Knibbs, Wired; Every AI Copyright Lawsuit in the US, Visualized

"WIRED is keeping close tabs on how each of these lawsuits unfold. We’ve created visualizations to help you track and contextualize which companies and rights holders are involved, where the cases have been filed, what they’re alleging, and everything else you need to know."

Tuesday, December 3, 2024

Getty Images CEO Calls AI Training Models ‘Pure Theft’; PetaPixel, December 3, 2024

 MATT GROWCOOT, PetaPixel; Getty Images CEO Calls AI Training Models ‘Pure Theft’

"The CEO of Getty Images has penned a column in which he calls the practice of scraping photos and other content from the open web by AI companies “pure theft”.

Writing for Fortune, Craig Peters argues that fair use rules must be respected and that AI training practices are in contravention of those rules...

“I am responsible for an organization that employs over 1,700 individuals and represents the work of more than 600,000 journalists and creators worldwide,” writes Peters. “Copyright is at the very core of our business and the livelihood of those we employ and represent.”"

Friday, November 29, 2024

Major Canadian News Outlets Sue OpenAI in New Copyright Case; The New York Times, November 29, 2024

 The New York Times; Major Canadian News Outlets Sue OpenAI in New Copyright Case

"A coalition of Canada’s biggest news organizations is suing OpenAI, the maker of the artificial intelligence chatbot, ChatGPT, accusing the company of illegally using their content in the first case of its kind in the country.

Five of the country’s major news companies, including the publishers of its top newspapers, newswires and the national broadcaster, filed the joint suit in the Ontario Superior Court of Justice on Friday morning...

The Canadian outlets, which include the Globe and Mail, the Toronto Star and the CBC — the Canadian Broadcasting Corporation — are seeking what could add up to billions of dollars in damages. They are asking for 20,000 Canadian dollars, or $14,700, per article they claim was illegally scraped and used to train ChatGPT.

They are also seeking a share of the profits made by what they claim is OpenAI’s misuse of their content, as well as for the company to stop such practices in the future."

Tuesday, November 5, 2024

Penguin Random House books now explicitly say ‘no’ to AI training; The Verge, October 18, 2024

Emma Roth, The Verge; Penguin Random House books now explicitly say ‘no’ to AI training

"Book publisher Penguin Random House is putting its stance on AI training in print. The standard copyright page on both new and reprinted books will now say, “No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems,” according to a report from The Bookseller spotted by Gizmodo. 

The clause also notes that Penguin Random House “expressly reserves this work from the text and data mining exception” in line with the European Union’s laws. The Bookseller says that Penguin Random House appears to be the first major publisher to account for AI on its copyright page. 

What gets printed on that page might be a warning shot, but it also has little to do with actual copyright law. The amended page is sort of like Penguin Random House’s version of a robots.txt file, which websites will sometimes use to ask AI companies and others not to scrape their content. But robots.txt isn’t a legal mechanism; it’s a voluntarily-adopted norm across the web. Copyright protections exist regardless of whether the copyright page is slipped into the front of the book, and fair use and other defenses (if applicable!) also exist even if the rights holder says they do not."

Friday, November 1, 2024

AI Training Study to Come This Year, Copyright Office Says; Bloomberg Law

 Annelise Gilbert, Bloomberg Law; AI Training Study to Come This Year, Copyright Office Says

"The Copyright Office’s report on the legal implications of training artificial intelligence models on copyrighted works is still expected to publish by the end of 2024, the office’s director told lawmakers.

Director Shira Perlmutter on Wednesday said the office aims to complete the remaining two sections of its three-part AI report in the next two months—one on the copyrightability of generative AI output and the other about liability, licensing, and fair use in regards to AI training on protected works."

Wednesday, October 23, 2024

Former OpenAI Researcher Says the Company Broke Copyright Law; The New York Times, October 23, 2024

 The New York Times; Former OpenAI Researcher Says the Company Broke Copyright Law

"Mr. Balaji believes the threats are more immediate. ChatGPT and other chatbots, he said, are destroying the commercial viability of the individuals, businesses and internet services that created the digital data used to train these A.I. systems.

“This is not a sustainable model for the internet ecosystem as a whole,” he told The Times."

Monday, October 21, 2024

Microsoft boss urges rethink of copyright laws for AI; The Times, October 21, 2024

 Katie Prescott, The Times; Microsoft boss urges rethink of copyright laws for AI

"The boss of Microsoft has called for a rethink of copyright laws so that tech giants are able to train artificial intelligence models without risk of infringing intellectual property rights.

Satya Nadella, chief executive of the technology multinational, praised Japan’s more flexible copyright laws and said that governments need to develop a new legal framework to define “fair use” of material, which allows people in certain situations to use intellectual property without permission.

Nadella, 57, said governments needed to iron out the rules. “What are the bounds for copyright, which obviously have to be protected? What’s fair use?” he said. “For any society to move forward, you need to know what is fair use.”"

Saturday, October 19, 2024

Courts Agree That No One Should Have a Monopoly Over the Law. Congress Shouldn’t Change That; Electronic Frontier Foundation (EFF), October 16, 2024

  CORYNNE MCSHERRY, Electronic Frontier Foundation (EFF); Courts Agree That No One Should Have a Monopoly Over the Law. Congress Shouldn’t Change That

"For more than a decade, giant standards development organizations (SDOs) have been fighting in courts around the country, trying use copyright law to control access to other laws. They claim that that they own the copyright in the text of some of the most important regulations in the country – the codes that protect product, building and environmental safety--and that they have the right to control access to those laws. And they keep losing because, it turns out, from New York, to Missouri, to the District of Columbia, judges understand that this is an absurd and undemocratic proposition. 

They suffered their latest defeat in Pennsylvania, where a district court held that UpCodes, a company that has created a database of building codes – like the National Electrical Code--can include codes incorporated by reference into law. ASTM, a private organization that coordinated the development of some of those codes, insists that it retains copyright in them even after they have been adopted into law. Some courts, including the Fifth Circuit Court of Appeals, have rejected that theory outright, holding that standards lose copyright protection when they are incorporated into law. Others, like the DC Circuit Court of Appeals in a case EFF defended on behalf of Public.Resource.Org, have held that whether or not the legal status of the standards changes once they are incorporated into law, posting them online is a lawful fair use.

In this case, ASTM v. UpCodes, the court followed the latter path. Relying in large part on the DC Circuit’s decision, as well as an amicus brief EFF filed in support of UpCodes, the court held that providing access to the law (for free or subject to a subscription for “premium” access) was a lawful fair use. A key theme to the ruling is the public interest in accessing law:"

Saturday, September 7, 2024

Trump’s other legal problem: Copyright infringement claims; The Washington Post, September 7, 2024

 The Washington Post; Trump’s other legal problem: Copyright infringement claims

"Music industry experts and copyright law attorneys say the cases, as well as Trump’s decision to continue playing certain songs despite artists’ requests that he desist, underscore the complex legalities of copyright infringement in today’s digital, streaming and licensing era — and could set an important precedent on the of use of popular music in political campaigns."

Thursday, September 5, 2024

The Internet Archive Loses Its Appeal of a Major Copyright Case; Wired, September 4, 2024

 Kate Knibbs, Wired; The Internet Archive Loses Its Appeal of a Major Copyright Case

"THE INTERNET ARCHIVE has lost a major legal battle—in a decision that could have a significant impact on the future of internet history. Today, the US Court of Appeals for the Second Circuit ruled against the long-running digital archive, upholding an earlier ruling in Hachette v. Internet Archive that found that one of the Internet Archive’s book digitization projects violated copyright law.

Notably, the appeals court’s ruling rejects the Internet Archive’s argument that its lending practices were shielded by the fair use doctrine, which permits for copyright infringement in certain circumstances, calling it “unpersuasive.”"

Thursday, August 29, 2024

OpenAI Pushes Prompt-Hacking Defense to Deflect Copyright Claims; Bloomberg Law, August 29, 2024

 Annelise Gilbert, Bloomberg Law; OpenAI Pushes Prompt-Hacking Defense to Deflect Copyright Claims

"Diverting attention to hacking claims or how many tries it took to obtain exemplary outputs, however, avoids addressing most publishers’ primary allegation: AI tools illegally trained on copyrighted works."