Showing posts with label AI tech companies. Show all posts

Tuesday, March 31, 2026

Copyright Law in 2025: Courts begin to draw lines around AI training, piracy, and market harm; Reuters, March 16, 2026

Reuters; Copyright Law in 2025: Courts begin to draw lines around AI training, piracy, and market harm

"In 2025, U.S. courts issued the first substantive, merits-stage decisions addressing whether the use of copyrighted works to train generative artificial intelligence systems constitutes 'fair use.' Although these rulings do not settle all open questions — and in some respects highlight emerging judicial disagreements — they represent a significant inflection point in copyright law's response to large language models, image generators, and other foundation models.

Taken together, these cases establish early guideposts for AI developers, publishers, media companies, and enterprises deploying generative AI systems. Below, we summarize the most important copyright decisions and pending cases shaping the law in 2025...

Conclusion and recommendations

The 2025 decisions reflect cautious but meaningful progress in defining how copyright law applies to generative AI. Courts are increasingly receptive to fair use arguments for training on lawfully acquired data, deeply skeptical of speculative market-harm claims, and uniformly intolerant of piracy. At the same time, cases involving direct competition, news content, and human likeness may test the limits of these early rulings."

Monday, March 30, 2026

Axios AI+DC Summit: Copyright protection in the AI era will be up to the courts, industry leaders say; Axios, March 27, 2026

Julie Bowen, Axios; Axios AI+DC Summit: Copyright protection in the AI era will be up to the courts, industry leaders say

"Washington, D.C. — As policymakers grapple with how to regulate AI, the hardest questions around copyright and fair use are being punted to the courts, according to governance, creator, and technology experts at an Axios expert voices roundtable.

The big picture: With Congress moving slowly and policy disagreements unresolved, judges are becoming the primary deciders of how AI companies and creators work together — or don't.


That's partly by necessity: "Fair use is incredibly complicated — case by case, fact specific," News/Media Alliance president and CEO Danielle Coffey said.


"Each case that we get … we start to get these new guideposts," Jones Walker partner Graham Ryan said.


Ryan said he expects at least three fair use decisions this year that will have implications for the broader AI-artist ecosystem.


Axios' Maria Curi and Ashley Gold moderated the March 25 discussion, which was sponsored by Adobe.

What they're saying: Legal uncertainty remains. For example, two courts within the same district, and during the same week, differed in the reasoning behind their rulings on similar matters of fair use and AI.


"There is a current, live controversy over … the extant understanding of the fourth factor in fair use, which is: Does the copy replace the market for the work?" said Kevin Bankston, senior adviser for the Center for Democracy & Technology.


Still, "we have been trying to support the process through the courts, because we think there is a really strong framework in copyright law for protecting artists right now," according to Public Knowledge president and CEO Chris Lewis."

Sunday, March 29, 2026

Meta’s court losses spell potential trouble for AI research, consumer safety; CNBC, March 29, 2026

Jonathan Vanian, CNBC; Meta’s court losses spell potential trouble for AI research, consumer safety

"Over a decade ago, Meta – then known as Facebook – hired social science researchers to analyze how the social network’s services were affecting users. It was a way for the company and its peers to show they were serious about understanding the benefits and potential risks of their innovations.

But as Meta’s court losses this week illustrate, the researchers’ work can become a liability. Brian Boland, a former Facebook executive who testified in both trials — one in New Mexico and the other in Los Angeles — says the damning findings from Meta’s internal research and documents seemed to contradict the way the company portrayed itself publicly. Juries in the two trials determined that Meta inadequately policed its site, putting kids in harm’s way. 

Mark Zuckerberg’s company began clamping down on its research teams a few years ago after a Facebook researcher, Frances Haugen, became a prominent whistleblower. The newer crop of tech companies, like OpenAI and Anthropic, subsequently invested heavily in researchers and charged them with studying the impact of modern AI on users and publishing their findings. 

With AI now getting outsized attention for the harmful effects it’s having on some users, those companies must ask if it’s in their best interest to continue funding research or to suppress it."

Friday, March 27, 2026

OpenAI Cancels Spicy “Adult Mode” Chatbot as Crisis Deepens; Futurism, March 26, 2026

Futurism; OpenAI Cancels Spicy “Adult Mode” Chatbot as Crisis Deepens

"The company’s panicked executives have made it abundantly clear that distracting “side quests” must be abandoned, while doubling down on both enterprise and coding. The purported goal is to stuff all of its offerings into a single “super app,” taking a page out of xAI CEO Elon Musk’s playbook.

These aren’t empty words by OpenAI execs. First, news emerged this week that the company is killing its disastrous Sora video AI slop app, lighting what was supposed to be a groundbreaking $1 billion deal with Disney on fire.

Now, the company is axing its spicy “adult mode” chatbot, as the Financial Times reports, once again highlighting how much pressure the company is under as competitors aren’t just catching up, but snatching up precious paying customers from right under its nose."

Q&A: The UK’s Copyright Report - A Gift to Creators, a Problem for AI; JD Supra, March 27, 2026

 Oliver Howley, JD Supra; Q&A: The UK’s Copyright Report - A Gift to Creators, a Problem for AI

"The UK Government has released its long-awaited copyright report, framed as an attempt to reconcile the competing interests of creators, technology companies and the wider innovation ecosystem. Rightsholders will welcome it, while the UK’s AI sector will find less comfort.

Two core policy decisions (on training data and on the ownership of AI-generated outputs) mark a shift away from earlier, more developer-friendly proposals. Both decisions leave significant questions unanswered: how AI developers can lawfully assemble training data at scale, what happens to content produced with minimal human input, and whether the UK’s current posture is sustainable in a world where capital and training runs are increasingly mobile.

In this Q&A, Oliver Howley, partner in Proskauer’s TMT Group and one of The Lawyer’s 2026 Hot 100, unpacks what the report says on these two decisions, what it leaves open, and what it means for developers, investors and rightsholders navigating the uncertainty ahead."

Mother and Daughter Rejected $26M Offer to Sell Farmland to Build 2,000-Acre Data Center, but Say Others Haven’t; People, March 26, 2026

Karla Marie Sanford, People; Mother and Daughter Rejected $26M Offer to Sell Farmland to Build 2,000-Acre Data Center, but Say Others Haven’t

“They call us old stupid farmers, you know, but we’re not,” said Ida Huddleston, 82

"A Kentucky mother and daughter are continuing to open up about their decision to keep their farmland rather than accept a multimillion-dollar payout that could pave the way for a data center, which may still be happening anyway.

“My grandfather and great-grandfather and a whole bunch of family have all lived here for years, paid taxes on it, fed a nation off of it,” Delsia Bare told CBS affiliate WKRC. “Even raised wheat through the Depression and kept bread lines up in the United States of America when people didn’t have anything else.”

Bare and her 82-year-old mom Ida Huddleston own hundreds of acres of farmland outside Maysville, according to WKRC. Together, the two have rejected over $26 million to sell part of the farmland to an undisclosed Fortune 100 company."

Thursday, March 26, 2026

OpenAI shutters AI video generator Sora in abrupt announcement; The Guardian, March 24, 2026

The Guardian; OpenAI shutters AI video generator Sora in abrupt announcement

Tech firm ‘says goodbye’ to Sora, made publicly available in 2024, just six months after its launch of a stand-alone app

"In an abrupt announcement on Tuesday, OpenAI said it was “saying goodbye” to its AI video generator Sora. The move comes just six months after the company’s splashy launch of a stand-alone app with which people could make and share hyper-realistic AI videos in a scrolling social feed."

Tuesday, March 24, 2026

Chicken Soup for the Soul Sues AI Firms for Copyright Infringement; Publishers Weekly, March 20, 2026

Ed Nawotka, Publishers Weekly; Chicken Soup for the Soul Sues AI Firms for Copyright Infringement

"Chicken Soup for the Soul is suing tech companies OpenAI, Anthropic, Google, Meta, xAI, Perplexity, Apple, and Nvidia for copyright infringement. The suit, filed March 17 in the Northern District of California, alleges that hundreds of its copyrighted works were ingested without authorization or compensation to train large language models...

Much like the complaint filed in December by author John Carreyrou and others against many of the same defendants, this filing also aims to challenge the class-action model that has dominated AI copyright litigation.

Pointing to the pending Anthropic settlement in the Northern District of California, the suit notes that the framework would pay rights holders approximately $3,000 per work—"just 2% of the Copyright Act's statutory ceiling of $150,000 per willfully infringed work." The complaint states that such settlements "seem to serve Defendants, not creators."

Chicken Soup for the Soul is instead seeking individualized statutory damages determined by a jury. The law firms behind the suit say more than 1,000 authors representing more than 5,000 works have signed on to the same approach."

Saturday, March 21, 2026

The dictionaries are suing OpenAI for ‘massive’ copyright infringement, and say ChatGPT is starving publishers of revenue; Fortune, March 21, 2026

Fortune; The dictionaries are suing OpenAI for ‘massive’ copyright infringement, and say ChatGPT is starving publishers of revenue

"In a filing submitted to the Southern District of New York, the companies accuse OpenAI of cannibalizing the traffic and ad revenue that publishers depend on to survive. “ChatGPT starves web publishers, like [the] Plaintiffs, of revenue,” the complaint reads. Where a traditional search engine sends users to a publisher’s website, Britannica and Merriam-Webster allege ChatGPT instead absorbs the content and delivers a polished answer. It also alleges the AI company fed its LLM with researched and fact-checked work of the companies’ hundreds of human writers and editors...

In an apt example, the complaint describes a prompt asking “How does Merriam-Webster define plagiarize?” to which the model reportedly responded with a definition identical to the one found in the Merriam-Webster dictionary. The complaint adds that the dictionary has been registered with the U.S. Copyright Office."

Thursday, March 19, 2026

UK reverses course on AI copyright position after backlash; Engadget, March 18, 2026

Will Shanklin, Engadget; UK reverses course on AI copyright position after backlash

"Chalk up a win for creative artists against AI companies. On Wednesday, the UK government abandoned its previous position on copyrighted works. It had been working on a data bill that, if unaltered, would have allowed AI companies like Google and OpenAI to train models on copyrighted materials without consent. Artists and other copyright holders would only have been offered a mere opt-out clause.

After significant backlash, the UK backed off from that position. "We have listened," Technology Secretary Liz Kendall said on Wednesday. However, the government’s new stance is, well, not a stance at all. It currently "no longer has a preferred option" about how to handle the issue.

Still, backpedaling from its previous position is viewed as a win for artists. UK Music CEO Tom Kiehl described the decision as "a major victory," while promising to work with the government on the next steps."

Tuesday, March 17, 2026

Now OpenAI is getting sued by the dictionary; Quartz, March 17, 2026

 Quartz Staff, Quartz; Now OpenAI is getting sued by the dictionary

Encyclopedia Britannica and Merriam-Webster sued the ChatGPT maker, accusing it of copying almost 100,000 articles to train its AI models

"Encyclopedia Britannica and its subsidiary Merriam-Webster have filed suit against OpenAI, alleging that the ChatGPT maker copied their copyrighted content without authorization to train its large language models.

The lawsuit, filed in Manhattan federal court last week, alleges that OpenAI used close to 100,000 Britannica articles to train its models, and that ChatGPT responses frequently reproduce or closely paraphrase Britannica's reference content, including encyclopedia articles and dictionary entries. The complaint also alleges OpenAI uses a retrieval-augmented generation system to pull from Britannica's content in real time when generating responses."

Senators tell ByteDance to ‘immediately shut down’ Seedance AI video app; CNBC, March 17, 2026

Emily Wilkins, CNBC; Senators tell ByteDance to ‘immediately shut down’ Seedance AI video app

"Sens. Marsha Blackburn and Peter Welch are calling for a halt to the new version of ByteDance’s artificial intelligence app, Seedance, which generates videos of real people and licensed characters, raising copyright and intellectual property concerns. 

Seedance 2.0 “is the most glaring example of copyright infringement from a ByteDance product to date, and you must immediately shut down Seedance and implement meaningful safeguards to prevent further infringing outputs,” Blackburn, R-Tenn., and Welch, D-Vt., wrote in a letter to ByteDance CEO Liang Rubo that was first obtained by CNBC.

Their letter is a sign of growing concerns on Capitol Hill about how AI companies are developing and using their models and whether proper protections are in place for those who generate the materials the models train from."

Monday, March 16, 2026

This Bill Would Force AI Companies to Disclose Copyrighted Works; PetaPixel, March 16, 2026

Pesala Bandara, PetaPixel; This Bill Would Force AI Companies to Disclose Copyrighted Works

"U.S. Senators Adam Schiff, a Democrat from California, and John Curtis, a Republican from Utah, have introduced the Copyright Labeling and Ethical AI Reporting Act, known as the CLEAR Act. The proposed legislation would require companies developing AI models to report when copyrighted material is used to train those systems.

If passed, the legislation could increase transparency around the material used to train generative AI systems, including copyrighted photographs."

UK to rule out sweeping AI copyright overhaul; Politico, March 11, 2026

Joseph Bambridge, Politico; UK to rule out sweeping AI copyright overhaul

The U.K. will rule out making creatives actively opt out of having their copyrighted material scraped by AI companies.

"The U.K. government will rule out sweeping reform of its copyright laws in a highly-anticipated policy update next week, according to three people briefed on government thinking and granted anonymity to speak freely. 

The people said the update, due by March 18, will state the government does not plan to take forward work on an “opt out” model, whereby rights holders would have to explicitly say they do not want their work used to train AI models. 


It comes amid intense pressure from rights holders and lawmakers not to pursue the “opt out” policy. The government previously said this was its “preferred option” to facilitate AI innovation in the U.K., before ministers were forced to row back."

Sunday, March 15, 2026

Music Copyright in the Gen AI Age: Where Are We Now?; Brooklyn Sports & Entertainment Law Blog, February 11, 2026

Sam Woods, Brooklyn Sports & Entertainment Law Blog; Music Copyright in the Gen AI Age: Where Are We Now?

"Imagine you are a musician who has dedicated years of your life creating an album or EP — tinkering with the production, revising lyrics, finding the perfect samples — and now, you have finally shared your art with the world and are thrilled with the project’s success. However, while scrolling on TikTok a few months later, you hear some familiar audio. Wait a minute, is that one of your songs? No… not quite, but why does it sound so similar? Turns out, the song was created using artificial intelligence (“AI”)."

AI is dressing up greed as progress on creative rights; Financial Times, March 14, 2026

Financial Times; AI is dressing up greed as progress on creative rights

"At this week’s London Book Fair, a lot of people were walking around with one particular title wedged under their arms. Called Don’t Steal This Book, its pages are empty apart from the names of thousands of authors, including Kazuo Ishiguro and Richard Osman. It’s a chilling protest against the rampant theft of creative work by tech firms, which could leave future artists unable to earn a living."

Saturday, March 14, 2026

The Guardian view on changes to copyright laws: authors should be protected over big tech; The Guardian, March 13, 2026

The Guardian; The Guardian view on changes to copyright laws: authors should be protected over big tech

"In a scene that might have come from a dystopian novel, books were being stamped with “Human Authored” logos at this week’s London Book Fair. The Society of Authors described its labelling scheme as “an important sticking plaster to protect and promote human creativity in lieu of AI labelled content in the marketplace”.

Visitors to the fair were also being given copies of Don’t Steal This Book, an anthology of about 10,000 writers including Nobel laureate Kazuo Ishiguro, Malorie Blackman, Jeanette Winterson and Richard Osman, in which the pages are completely blank. The back cover states: “The UK government must not legalise book theft to benefit AI companies.” The message is clear: writers have had enough.

The fair comes the week before the government is due to deliver its progress report on AI and copyright, after proposals for a relaxation of existing laws caused outrage last year. Philippa Gregory, the novelist, described the plans for an “opt-out” policy, which puts the onus on writers to refuse permission for their work to be trawled, as akin to putting a sign on your front door asking burglars to pass by...

A House of Lords report published last week lays out two possible futures: one in which the UK “becomes a world-leading home for responsible, legalised artificial intelligence (AI) development” and another in which it continues “to drift towards tacit acceptance of large-scale, unlicensed use of creative content”. One scenario protects UK artists, the other benefits global tech companies. To avoid a world of empty content, the choice is clear."

Anthropic-Pentagon battle shows how big tech has reversed course on AI and war; The Guardian, March 13, 2026

The Guardian; Anthropic-Pentagon battle shows how big tech has reversed course on AI and war

"The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war – and what lines it will not cross. Amid Silicon Valley’s rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech’s answer is looking very different than it did even less than a decade ago."

Wednesday, March 11, 2026

Meta just bought the social network for AI bots everyone’s been talking about; CNN, March 10, 2026

Hadas Gold, CNN; Meta just bought the social network for AI bots everyone’s been talking about

"Meta, the company behind some of the world’s most popular social media platforms, just scooped up a new site – for bots.

Meta has acquired Moltbook, the social media network where AI agents interact with one another autonomously, the company said in a statement on Tuesday.

Meta is competing with rivals like OpenAI for both talent and users’ attention. And as AI expands into more aspects of Americans’ lives, tech companies are trying to figure out the best way to position themselves to win what’s becoming a sort of technological arms race.

Moltbook became the talk of Silicon Valley last month, racking up millions of registered bots within days of its launch. Some in the industry saw it as a major leap because it demonstrated what can happen when AI agents socialize with one another like humans. Others said the site is full of sham agents, AI slop and security risks and should be viewed skeptically."