Showing posts with label AI outputs. Show all posts

Tuesday, January 20, 2026

FREE WEBINAR: REGISTER: AI, Intellectual Property and the Emerging Legal Landscape; National Press Foundation, Thursday, January 22, 2026

 National Press Foundation; REGISTER: AI, Intellectual Property and the Emerging Legal Landscape

"Artificial intelligence is colliding with U.S. copyright law in ways that could reshape journalism, publishing, software, and the creative economy.

The intersection of AI and intellectual property has become one of the most consequential legal battles of the digital age, with roughly 70 federal lawsuits filed against AI companies and copyright claims on works ranging from literary and visual work to music and sound recording to computer programs. Billions of dollars are at stake.

Courts are now deciding what constitutes “fair use,” whether and how AI companies may use copyrighted material to build models, what licensing is required, and who bears responsibility when AI outputs resemble protected works. The legal decisions will shape how news, art, and knowledge are produced — and who gets paid for them.

To help journalists better understand and report on the developing legal issues of AI and IP, join the National Press Foundation and a panel of experts for a wide-ranging discussion around the stakes, impact and potential solutions. Experts in technology and innovation as well as law and economics join journalists in this free online briefing 12-1 p.m. ET on Thursday, January 22, 2026."

Friday, January 16, 2026

AI’S MEMORIZATION CRISIS: Large language models don’t “learn”—they copy. And that could change everything for the tech industry.; The Atlantic, January 9, 2026

Alex Reisner, The Atlantic; AI’S MEMORIZATION CRISIS: Large language models don’t “learn”—they copy. And that could change everything for the tech industry

"On Tuesday, researchers at Stanford and Yale revealed something that AI companies would prefer to keep hidden. Four popular large language models—OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok—have stored large portions of some of the books they’ve been trained on, and can reproduce long excerpts from those books."

Extracting books from production language models; Cornell University, January 6, 2026

Ahmed Ahmed, A. Feder Cooper, Sanmi Koyejo, Percy Liang, Cornell University; Extracting books from production language models

"Many unresolved legal questions over LLMs and copyright center on memorization: whether specific training data have been encoded in the model's weights during training, and whether those memorized data can be extracted in the model's outputs. While many believe that LLMs do not memorize much of their training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models. However, it remains an open question if similar extraction is feasible for production LLMs, given the safety measures these systems implement. We investigate this question using a two-phase procedure: (1) an initial probe to test for extraction feasibility, which sometimes uses a Best-of-N (BoN) jailbreak, followed by (2) iterative continuation prompts to attempt to extract the book. We evaluate our procedure on four production LLMs -- Claude 3.7 Sonnet, GPT-4.1, Gemini 2.5 Pro, and Grok 3 -- and we measure extraction success with a score computed from a block-based approximation of longest common substring (nv-recall). With different per-LLM experimental configurations, we were able to extract varying amounts of text. For the Phase 1 probe, it was unnecessary to jailbreak Gemini 2.5 Pro and Grok 3 to extract text (e.g., nv-recall of 76.8% and 70.3%, respectively, for Harry Potter and the Sorcerer's Stone), while it was necessary for Claude 3.7 Sonnet and GPT-4.1. In some cases, jailbroken Claude 3.7 Sonnet outputs entire books near-verbatim (e.g., nv-recall=95.8%). GPT-4.1 requires significantly more BoN attempts (e.g., 20X), and eventually refuses to continue (e.g., nv-recall=4.0%). Taken together, our work highlights that, even with model- and system-level safeguards, extraction of (in-copyright) training data remains a risk for production LLMs."
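The nv-recall score described in the abstract is a block-based approximation of longest common substring between a reference book and the model's output. The authors' exact definition is not given here; the sketch below is one illustrative interpretation, with the block size and exact-match rule as assumptions.

```python
# Minimal sketch of a block-based recall score, loosely inspired by the
# nv-recall metric described in the abstract. Block size and the verbatim
# substring-match rule are illustrative assumptions, not the paper's
# exact definition.

def block_recall(reference: str, extracted: str, block_size: int = 50) -> float:
    """Fraction of fixed-size reference blocks found verbatim in the output."""
    # Split the reference text into non-overlapping fixed-size blocks.
    blocks = [
        reference[i:i + block_size]
        for i in range(0, len(reference) - block_size + 1, block_size)
    ]
    if not blocks:
        return 0.0
    # Count blocks that appear verbatim anywhere in the extracted text.
    hits = sum(1 for block in blocks if block in extracted)
    return hits / len(blocks)
```

Under this reading, an nv-recall of 95.8% would mean nearly every block of the reference text appears verbatim somewhere in the model's output.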

Tuesday, January 13, 2026

‘Clock Is Ticking’ For Creators On AI Content Copyright Claims, Experts Warn; Forbes, January 9, 2026

Rob Salkowitz, Forbes; ‘Clock Is Ticking’ For Creators On AI Content Copyright Claims, Experts Warn

"Despite this string of successes, creators like BT caution that content owners need to move quickly to secure any kind of terms. “A lot of artists have their heads in the sand with respect to AI,” he said. “The fact is, if they don’t come to some kind of agreement, they may end up with nothing.”

The concern is that AI models are increasingly being trained on synthetic data: that is, on the output of AI systems, rather than on content attributable to any individual creator or rights owner. Gartner estimates that 75% of AI training data in 2026 will be synthetic. That number could hit 100% by 2030. Once the tech companies no longer need human-produced content, they will stop paying for it.

“The quality of outputs from AI systems has been improving dramatically, which means that it is possible to train on synthetic data without risking model collapse,” said Dr. Daniela Braga, founder and CEO of the data training firm Defined.ai, in a separate interview at CES. “The window is definitely closing for individual rights owners to secure favorable terms.”

Other experts suggest that these claims may be overstated.

Braga says the best way creators can protect themselves is to do business with ethical companies willing to provide compensation for high-quality human-produced content and represent the superior value of that content to their customers. As models grow in capabilities, the need will shift from sheer volume of data to data that is appropriately tagged and annotated to fit easily into specific use cases.

There remain some profound questions around the sustainability of AI from a business standpoint, with demand for services among enterprise and consumers lagging the massive, and massively expensive, build-out of capacity. For some artists opposed to generative AI in its entirety, there may be the temptation to wait it out until the bubble bursts. After all, these artists created their work to be enjoyed by humans, not to be consumed in bulk by machines threatening their livelihoods. In light of those objections, the prospect of a meager payout might seem unappealing."

Monday, December 22, 2025

Natasha Lyonne says AI has an ethics problem because right now it’s ‘super kosher copacetic to rob freely under the auspices of acceleration’; Fortune, December 20, 2025

Fortune; Natasha Lyonne says AI has an ethics problem because right now it’s ‘super kosher copacetic to rob freely under the auspices of acceleration’

"Asteria partnered with Moonvalley AI, which makes AI tools for filmmakers, to create Marey, named after cinematographer Étienne-Jules Marey. The tool helps generate AI video that can be used for movies and TV, but only draws on open-license content or material it has explicit permission to use. 

Being careful about the inputs for Asteria’s AI video generation is important, Lyonne said at the Fortune Brainstorm AI conference in San Francisco last week. As AI use increases, both tech and Hollywood need to respect the work of the cast, as well as the crew and the writers behind the scenes. 

“I don’t think it’s super kosher copacetic to just kind of rob freely under the auspices of acceleration or China,” she said. 

While she hasn’t yet used AI to help make a TV show or movie, Lyonne said Asteria has used it in other small ways to develop renderings and other details.

“It’s a pretty revolutionary act that we actually do have that model and that’s you know the basis for everything that we work on,” said Lyonne.

Marey is available to the public for a credits-based subscription starting at $14.99 per month."

Friday, December 12, 2025

The Disney-OpenAI Deal Redefines the AI Copyright War; Wired, December 11, 2025

BRIAN BARRETT, Wired; The Disney-OpenAI Deal Redefines the AI Copyright War

"“I think that AI companies and copyright holders are beginning to understand and become reconciled to the fact that neither side is going to score an absolute victory,” says Matthew Sag, a professor of law and artificial intelligence at Emory University. While many of these cases are still working their way through the courts, so far it seems like model inputs—the training data that these models learn from—are covered by fair use. But this deal is about outputs—what the model returns based on your prompt—where IP owners like Disney have a much stronger case.

Coming to an output agreement resolves a host of messy, potentially unsolvable issues. Even if a company tells an AI model not to produce, say, Elsa at a Wendy’s drive-through, the model might know enough about Elsa to do so anyway—or a user might be able to prompt their way into making Elsa without asking for the character by name. It’s a tension that legal scholars call the “Snoopy problem,” but in this case you might as well call it the Disney problem.

“Faced with this increasingly clear reality, it makes sense for consumer-facing AI companies and entertainment giants like Disney to think about licensing arrangements,” says Sag."

Thursday, December 11, 2025

Disney says Google AI infringes copyright “on a massive scale”; Ars Technica, December 11, 2025

RYAN WHITWAM, Ars Technica; Disney says Google AI infringes copyright “on a massive scale”

"Disney has sent a cease and desist to Google, alleging the company’s AI tools are infringing Disney’s copyrights “on a massive scale.”

According to the letter, Google is violating the entertainment conglomerate’s intellectual property in multiple ways. The legal notice says Google has copied a “large corpus” of Disney’s works to train its gen AI models, which is believable, as Google’s image and video models will happily produce popular Disney characters—they couldn’t do that without feeding the models lots of Disney data.

The C&D also takes issue with Google for distributing “copies of its protected works” to consumers."

Has Cambridge-based AI music upstart Suno 'gone legit'?; WBUR, December 11, 2025

WBUR; Has Cambridge-based AI music upstart Suno 'gone legit'?

"The Cambridge-based AI music company Suno, which has been besieged by lawsuits from record labels, is now teaming up with behemoth label Warner Music. Under a new partnership, Warner will license music in its catalogue for use by Suno's AI.

Copyright law experts Peter Karol and Bhamati Viswanathan join WBUR's Morning Edition to discuss what the deal between Suno and Warner Music means for the future of intellectual property."

Tuesday, December 2, 2025

Two AI copyright cases, two very different outcomes – here’s why; The Conversation, December 1, 2025

Reader in Intellectual Property Law, Brunel University of London, The Conversation; Two AI copyright cases, two very different outcomes – here’s why

"Artificial intelligence companies and the creative industries are locked in an ongoing battle, being played out in the courts. The thread that pulls all these lawsuits together is copyright.

There are now over 60 ongoing lawsuits in the US where creators and rightsholders are suing AI companies. Meanwhile, we have recently seen decisions in the first court cases from the UK and Germany – here’s what happened in those...

Although the circumstances of the cases are slightly different, the heart of the issue was the same. Do AI models reproduce copyright-protected content in their training process and in generating outputs? The German court decided they do, whereas the UK court took a different view.

Both cases could be appealed and others are underway, so things may change. But the ending we want to see is one where AI and the creative industries come together in agreement. This would preferably happen with the use of copyright licences that benefit them both.

Importantly, it would also come with the consent of – and fair payment to – creators of the content that makes both their industries go round."

Wednesday, November 26, 2025

AI, ethics, and the lawyer's duty after Noland v. Land of the Free; Daily Journal, November 24, 2025

Reza Torkzadeh, Daily Journal; AI, ethics, and the lawyer's duty after Noland v. Land of the Free

"Noland establishes a bright line for California lawyers. AI may assist with drafting or research, but it does not replace judgment, verification or ethical responsibility. Technology may change how legal work is produced -- it does not change who is accountable for it."

GEORGE C. YOUNG AMERICAN INNS OF COURT EXPLORES ETHICS AND PITFALLS OF AI IN THE COURTROOM; The Florida Bar, November 26, 2025

The Florida Bar; GEORGE C. YOUNG AMERICAN INNS OF COURT EXPLORES ETHICS AND PITFALLS OF AI IN THE COURTROOM

"The George C. Young American Inns of Court continued its ongoing focus on artificial intelligence with a recent program titled, “The Use of AI to Craft Openings, Closings, and Directing Cross-Examination: Ethical Imperatives and Practical Realities.”...

Demonstrations showed that many members could not distinguish AI-generated narratives from those written by humans, highlighting the technology’s increasingly high-quality output. However, presenters also noted recurring drawbacks. AI-generated direct and cross-examinations frequently included prohibited or incorrect elements such as hearsay, compound questioning, and fabricated details — jokingly referred to as “ghost people” — distinguishing factual hallucinations from the better-known “phantom citation” problem.

The program concluded with a reminder that while AI may streamline drafting and help lawyers think creatively, professional judgment cannot be outsourced. The ultimate responsibility for accuracy, ethics, and advocacy remains with the lawyer."

Friday, November 21, 2025

Japan Police Accuse Man of Unauthorized Use of AI-Generated Image in Landmark Copyright Case; IGN, November 21, 2025

IGN; Japan Police Accuse Man of Unauthorized Use of AI-Generated Image in Landmark Copyright Case

"Police in Japan have accused a man of unauthorized reproduction of an AI-generated image. This is believed to be the first ever legal case in Japan where an AI-generated image has been treated as a copyrighted work under the country’s Copyright Act.

According to the Yomiuri Shimbun and spotted by Dexerto, the case relates to an AI-generated image created using Stable Diffusion back in 2024 by a man in his 20s from Japan’s Chiba prefecture. This image was then allegedly reused without permission by a 27-year-old man (also from Chiba) for the cover of his commercially-available book. 

The original creator of the image told the Yomiuri Shimbun that he had used over 20,000 prompts to generate the final picture. The police allege that the creator had sufficient involvement in the AI image’s creation, and the matter has been referred to the Chiba District Public Prosecutors Office.

Japan’s Copyright Act defines a copyrighted work as a “creatively produced expression of thoughts or sentiments that falls within the literary, academic, artistic, or musical domain.” In regard to whether an AI-generated image can be copyrighted or not, the Agency of Cultural Affairs has stated that an AI image generated with no instructions or very basic instructions from a human is not a “creatively produced expression of thoughts or sentiments” and therefore not considered to meet the requirements to be copyrighted work.

However, if a person has used AI as a tool to creatively express thoughts or feelings, the AI-generated output might be considered a copyrighted work. This is to be decided on a case-by-case basis. The process behind the creation of the specific AI-generated image has to be looked at in order to determine whether it can be considered to be creative enough to be termed a copyrighted work. Key criteria are the amount of detailed prompts, the refining of instructions over repeated generation attempts, and creative selections or changes to outputs."

Thursday, November 13, 2025

OpenAI copyright case reveals 'ease with which generative AI can devastate the market', says PA; The Bookseller, November 12, 2025

MATILDA BATTERSBY, The Bookseller; OpenAI copyright case reveals 'ease with which generative AI can devastate the market', says PA

"A judge’s ruling that legal action by authors against OpenAI for copyright infringement can go ahead reveals “the ease with which generative AI can devastate the market”, according to the Publishers Association (PA).

Last week, a federal judge in the US refused OpenAI’s attempts to dismiss claims by authors that text summaries of published works by ChatGPT (which is owned by OpenAI) infringe their copyrights.

The lawsuit, which is being heard in New York, brings together cases from a number of authors, as well as the Authors Guild, filed in various courts.

In his ruling, which upheld the authors’ right to attempt to sue OpenAI, District Judge Sidney Stein compared George RR Martin’s Game of Thrones to summaries of the novel created by ChatGPT.

Judge Stein said: “[A] discerning observer could easily conclude that this detailed summary is substantially similar to Martin’s original work because the summary conveys the overall tone and feel of the original work by parroting the plot, characters and themes of the original.”

The class action consolidates 12 complaints being brought against OpenAI and Microsoft. It argues copyrighted books were reproduced to train OpenAI’s artificial intelligence large language models (LLM) and, crucially, that LLMs, including ChatGPT, can infringe copyright via their output, ie the text produced when asked a question.

This landmark legal case is the first to examine whether the output of an AI chatbot infringes copyright, rather than looking at whether the training of the model was an infringement."

Monday, October 20, 2025

The platform exposing exactly how much copyrighted art is used by AI tools; The Guardian, October 18, 2025

The Guardian; The platform exposing exactly how much copyrighted art is used by AI tools

"The US tech platform Vermillio tracks use of a client’s intellectual property online and claims it is possible to trace, approximately, the percentage to which an AI generated image has drawn on pre-existing copyrighted material."

Friday, October 10, 2025

You Can’t Use Copyrighted Characters in OpenAI’s Sora Anymore and People Are Freaking Out; Gizmodo, October 8, 2025

Gizmodo; You Can’t Use Copyrighted Characters in OpenAI’s Sora Anymore and People Are Freaking Out

 "OpenAI may be able to appease copyright holders by shifting its Sora policies, but it’s now pissed off its users. As 404 Media pointed out, social channels like Twitter and Reddit are now flooded with Sora users who are angry they can’t make 10-second clips featuring their favorite characters anymore. One user in the OpenAI subreddit said that being able to play with copyrighted material was “the only reason this app was so fun.” Another claimed, “Moral policing and leftist ideology are destroying America’s AI industry.” So, you know, it seems like they’re handling this well."

Sunday, October 5, 2025

OpenAI hastily retreats from gung-ho copyright policy after embarrassing Sora video output like AI Sam Altman surrounded by Pokémon saying 'I hope Nintendo doesn't sue us'; PC Gamer, October 5, 2025

PC Gamer; OpenAI hastily retreats from gung-ho copyright policy after embarrassing Sora video output like AI Sam Altman surrounded by Pokémon saying 'I hope Nintendo doesn't sue us'

"This video is just one of many examples, but you'll have a much harder time finding Sora-generated videos containing Marvel or Disney characters. As reported by Automaton, Sora appears to be refusing prompts containing references to American IP, but Japanese IP didn't seem to be getting the same treatment over the past week.

Japanese lawyer and House of Representatives member Akihisa Shiozaki called for action to protect creatives in a post on X (formerly Twitter), which has been translated by Automaton: "I’ve tried out [Sora 2] myself, but I felt that it poses a serious legal and political problem. We need to take immediate action if we want to protect leading Japanese creators and the domestic content industry, and help them further develop. (I wonder why Disney and Marvel characters can’t be displayed).""

Monday, September 22, 2025

Librarians Are Being Asked to Find AI-Hallucinated Books; 404 Media, September 18, 2025

CLAIRE WOODCOCK, 404 Media; Librarians Are Being Asked to Find AI-Hallucinated Books

"Reference librarian Eddie Kristan said lenders at the library where he works have been asking him to find books that don’t exist without realizing they were hallucinated by AI ever since the release of GPT-3.5 in late 2022. But the problem escalated over the summer after he fielded patron requests for the same fake book titles from real authors—the consequences of an AI-generated summer reading list circulated in special editions of the Chicago Sun-Times and The Philadelphia Inquirer earlier this year. At the time, the freelancer told 404 Media he used AI to produce the list without fact checking outputs before syndication. 

“We had people coming into the library and asking for those authors,” Kristan told 404 Media. He’s receiving similar requests for other types of media that don’t exist because they’ve been hallucinated by other AI-powered features. “It’s really, really frustrating, and it’s really setting us back as far as the community’s info literacy.” 

AI tools are changing the nature of how patrons treat librarians, both online and IRL. Alison Macrina, executive director of Library Freedom Project, told 404 Media early results from a recent survey of emerging trends in how AI tools are impacting libraries indicate that patrons are growing more trusting of their preferred generative AI tool or product, and the veracity of the outputs they receive. She said librarians report being treated like robots over library reference chat, and patrons getting defensive over the veracity of recommendations they’ve received from an AI-powered chatbot. Essentially, more people trust their preferred LLM over their human librarian."

Monday, August 25, 2025

Who owns the copyright for AI work?; Financial Times, August 24, 2025

Financial Times; Who owns the copyright for AI work?

"Generative artificial intelligence poses two copyright puzzles. The first is the widely discussed question of compensation for work used to train AI models. The second, which has yet to receive as much attention, concerns the work that AI produces. Copyright is granted to authors. So what happens to work that has no human author?"

Sunday, August 24, 2025

Suetopia: Generative AI is a lawsuit waiting to happen to your business; The Register, August 12, 2025

Adam Pitch, The Register; Suetopia: Generative AI is a lawsuit waiting to happen to your business

"More and more US companies are using generative AI as a way to save money they might otherwise pay creative professionals. But they're not thinking about the legal bills.

You could be asking an AI to create public-facing communications for your company, such as a logo, promotional copy, or an entire website. If those materials happen to look like copyrighted works, you may be hearing from a lawyer.

"It's pretty clear that if you create something that's substantially similar to a copyrighted work that an infringement has occurred, unless it's for a fair use purpose," said Kit Walsh, the Electronic Frontier Foundation's Director of AI and Access-to-Knowledge Legal Projects."

Wednesday, July 9, 2025

Why the new rulings on AI copyright might actually be good news for publishers; Fast Company, July 9, 2025

 PETE PACHAL, Fast Company; Why the new rulings on AI copyright might actually be good news for publishers

"The outcomes of both cases were more mixed than the headlines suggest, and they are also deeply instructive. Far from closing the door on copyright holders, they point to places where litigants might find a key...

Taken together, the three cases point to a clearer path forward for publishers building copyright cases against Big AI:

Focus on outputs instead of inputs: It’s not enough that someone hoovered up your work. To build a solid case, you need to show that what the AI company did with it reproduced it in some form. So far, no court has definitively decided whether AI outputs are meaningfully different enough to count as “transformative” in the eyes of copyright law, but it should be noted that courts have ruled in the past that copyright violation can occur even when small parts of the work are copied—if those parts represent the “heart” of the original.

Show market harm: This looks increasingly like the main battle. Now that we have a lot of data on how AI search engines and chatbots—which, to be clear, are outputs—are affecting the online behavior of news consumers, the case that an AI service harms the media market is easier to make than it was a year ago. In addition, the emergence of licensing deals between publishers and AI companies is evidence that generating outputs without offering such a deal causes market harm.

Question source legitimacy: Was the content legally acquired or pirated? The Anthropic case opens this up as a possible attack vector for publishers. If they can prove scraping occurred through paywalls—without subscribing first—that could be a violation even absent any outputs."