Showing posts with label legal issues.

Wednesday, January 29, 2025

Copyright Office Releases Part 2 of Artificial Intelligence Report; U.S. Copyright Office, Issue No. 1060, January 29, 2025

  U.S. Copyright Office, Issue No. 1060; Copyright Office Releases Part 2 of Artificial Intelligence Report

"Today, the U.S. Copyright Office is releasing Part 2 of its Report on the legal and policy issues related to copyright and artificial intelligence (AI). This Part of the Report addresses the copyrightability of outputs created using generative AI. The Office affirms that existing principles of copyright law are flexible enough to apply to this new technology, as they have applied to technological innovations in the past. It concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements. This can include situations where a human-authored work is perceptible in an AI output, or a human makes creative arrangements or modifications of the output, but not the mere provision of prompts. The Office confirms that the use of AI to assist in the process of creation or the inclusion of AI-generated material in a larger human-generated work does not bar copyrightability. It also finds that the case has not been made for changes to existing law to provide additional protection for AI-generated outputs.

“After considering the extensive public comments and the current state of technological development, our conclusions turn on the centrality of human creativity to copyright,” said Shira Perlmutter, Register of Copyrights and Director of the U.S. Copyright Office. “Where that creativity is expressed through the use of AI systems, it continues to enjoy protection. Extending protection to material whose expressive elements are determined by a machine, however, would undermine rather than further the constitutional goals of copyright.”

In early 2023, the Copyright Office announced a broad initiative to explore the intersection of copyright and AI. Since then, the Office has issued registration guidance for works incorporating AI-generated content, hosted public listening sessions and webinars, met with experts and stakeholders, published a notice of inquiry seeking input from the public, and reviewed more than 10,000 responsive comments, which served to inform these conclusions.

The Report is being released in three Parts. Part 1 was published on July 31, 2024, and recommended federal legislation to respond to the unauthorized distribution of digital replicas that realistically but falsely depict an individual. The final, forthcoming Part 3 will address the legal implications of training AI models on copyrighted works, including licensing considerations and the allocation of any potential liability.

As announced last year, the Office also plans to supplement its March 2023 registration guidance and update the relevant sections of the Compendium of U.S. Copyright Office Practices.

For more information about the Copyright Office’s AI Initiative, please visit the website."

Tuesday, July 4, 2023

Legitimacy Of 'Customer' In Supreme Court Gay Rights Case Raises Ethical, Legal Flags; AP via Huff Post, July 3, 2023

Alanna Durkin Richer and Colleen Slevin, AP via Huff Post; Legitimacy Of 'Customer' In Supreme Court Gay Rights Case Raises Ethical, Legal Flags

"A Christian graphic artist who the Supreme Court said can refuse to make wedding websites for gay couples pointed during her lawsuit to a request from a man named “Stewart” and his husband-to-be. The twist? Stewart says it never happened.

The revelation has raised questions about how Lorie Smith’s case was allowed to proceed all the way to the nation’s highest court with such an apparent misrepresentation and whether the state of Colorado, which lost the case last week, has any legal recourse...

COULD THE REVELATION IMPACT THE CASE NOW?

It’s highly unlikely. The would-be customer’s request was not the basis for Smith’s original lawsuit, nor was it cited by the high court as the reason for ruling in her favor. Legal standing, or the right to bring a lawsuit, generally requires the person bringing the case to show that they have suffered some sort of harm. But pre-enforcement challenges — like the one Smith brought — are allowed in certain cases if the person can show they face a credible threat of prosecution or sanctions unless they conform to the law.

The 10th U.S. Circuit Court of Appeals, which reviewed the case before the Supreme Court, found that Smith had standing to sue. That appeals court noted that Colorado had a history of past enforcement “against nearly identical conduct” and that the state declined to promise that it wouldn’t go after Smith if she violated the law."

Monday, June 19, 2023

Ethical, legal issues raised by ChatGPT training literature; Tech Xplore, May 8, 2023

Peter Grad, Tech Xplore; Ethical, legal issues raised by ChatGPT training literature

""Knowing what books a model has been trained on is critical to assess such sources of bias," they said.

"Our work here has shown that OpenAI models know about books in proportion to their popularity on the web."

Works detected in the Berkeley study include "Harry Potter," "1984," "Lord of the Rings," "Hunger Games," "Hitchhiker's Guide to the Galaxy," "Fahrenheit 451," "A Game of Thrones" and "Dune."

While ChatGPT was found to be quite knowledgeable about popular works, lesser-known works such as Global Anglophone Literature—readings aimed beyond core English-speaking nations that include Africa, Asia and the Caribbean—were largely unknown. Also overlooked were works from the Black Book Interactive Project and Black Caucus Library Association award winners.

"We should be thinking about whose narrative experiences are encoded in these models, and how that influences other behaviors," Bamman, one of the Berkeley researchers, said in a recent Tweet. He added, "popular texts are probably not good barometers of model performance [given] the bias toward sci-fi/fantasy.""