Wednesday, July 23, 2025

Now Trump Wants to Rename Artificial Intelligence to This; The Daily Beast, July 23, 2025

Erkki Forster, The Daily Beast; Now Trump Wants to Rename Artificial Intelligence to This

"President Donald Trump has set his sights on a new linguistic enemy. While speaking at an artificial intelligence summit Wednesday, Trump realized mid-thought that he doesn’t like the word “artificial” at all. “I can’t stand it. I don’t even like the name,” the 79-year-old president said. ”You know, I don’t like anything that’s artificial so could we straighten that out please?” he asked, pointing to someone in the audience. “We should change the name.” As disbelieving laughter rippled through the room, Trump insisted, “I actually mean that—I don’t like the name ‘artificial’ anything.” He then offered an alternative—one he often uses to describe himself: “It’s not artificial. It’s genius. It’s pure genius.”"

Partner Who Wrote About AI Ethics, Fired For Citing Fake AI Cases; Above The Law, July 23, 2025

Joe Patrice, Above The Law; Partner Who Wrote About AI Ethics, Fired For Citing Fake AI Cases

"Don’t blame the AI for the fact that you read a brief and never bothered to print out the cases. Who does that? Long before AI, we all understood that you needed to look at the case itself to make sure no one missed the literal red flag on top. It might’ve ended up in there because of AI, but three lawyers and presumably a para or two had this brief and no one built a binder of the cases cited? What if the court wanted oral argument? No one is excusing the decision to ask ChatGPT to resolve your $24 million case, but the blame goes far deeper.

Malaty will shoulder most of the blame as the link in the workflow who should’ve known better. That said, her article about AI ethics, written last year, doesn’t actually address the hallucination problem. While risks of job displacement and algorithms reinforcing implicit bias are important, it is a little odd to write a whole piece on the ethics of legal AI without even breathing on hallucinations."

Trump derides copyright and state rules in AI Action Plan launch; Politico, July 23, 2025

Mohar Chatterjee, Politico; Trump derides copyright and state rules in AI Action Plan launch

"President Donald Trump criticized copyright enforcement efforts and state-level AI regulations Wednesday as he launched the White House’s AI Action Plan on a mission to dominate the industry.

In remarks delivered at a “Winning the AI Race” summit hosted by the All-In Podcast and the Hill and Valley Forum in Washington, Trump said stringent copyright enforcement was unrealistic for the AI industry and would kneecap U.S. companies trying to compete globally, particularly against China.

“You can’t be expected to have a successful AI program when every single article, book or anything else that you’ve read or studied, you’re supposed to pay for,” he said. “You just can’t do it because it’s not doable. ... China’s not doing it.”

Trump’s comments were a riff as his 28-page AI Action Plan did not wade into copyright and administration officials told reporters the issue should be left to the courts to decide.

Trump also signed three executive orders. One will fast track federal permitting, streamline reviews and “do everything possible to expedite construction of all major AI infrastructure projects,” Trump said. Another expands American exports of AI hardware and software. A third order bans the federal government from procuring AI technology “that has been infused with partisan bias or ideological agendas,” as Trump put it...

Trump echoed tech companies’ complaints about state AI laws creating a patchwork of regulation. “You can’t have one state holding you up,” he said. “We need one common sense federal standard that supersedes all states, supersedes everybody.”"

Trump has fired the head of the Library of Congress, but the 225-year-old institution remains a ‘library for all’ – so far; The Conversation, July 23, 2025

Associate Professor of Information Science, Drexel University, The Conversation; Trump has fired the head of the Library of Congress, but the 225-year-old institution remains a ‘library for all’ – so far

"A library for all

Following Hayden’s dismissal, Trump appointed Deputy Attorney General Todd Blanche, his former personal lawyer, as acting librarian of Congress. 

Hayden has contended that her dismissal, which occurred alongside other firings of top civil servants, including the national archivist, represents a broad threat to people’s right to easily access free information. 

“Democracies are not to be taken for granted,” Hayden said in June. She explained in an interview with CBS that she never had a problem with a presidential administration and is not sure why she was dismissed.

“And the institutions that support democracy should not be taken for granted,” Hayden added. 

In her final annual report as librarian, Hayden characterized the institution as “truly, a library for all.” So far, even without her leadership, it remains just that."

AI chatbots remain overconfident -- even when they’re wrong; EurekAlert!, July 22, 2025

Carnegie Mellon University, EurekAlert!; AI chatbots remain overconfident -- even when they’re wrong

"Artificial intelligence chatbots are everywhere these days, from smartphone apps and customer service portals to online search engines. But what happens when these handy tools overestimate their own abilities? 

Researchers asked both human participants and four large language models (LLMs) how confident they felt in their ability to answer trivia questions, predict the outcomes of NFL games or Academy Award ceremonies, or play a Pictionary-like image identification game. Both the people and the LLMs tended to be overconfident about how they would hypothetically perform. Interestingly, they also answered questions or identified images with relatively similar success rates.

However, when the participants and LLMs were asked retroactively how well they thought they did, only the humans appeared able to adjust expectations, according to a study published today in the journal Memory & Cognition.

“Say the people told us they were going to get 18 questions right, and they ended up getting 15 questions right. Typically, their estimate afterwards would be something like 16 correct answers,” said Trent Cash, who recently completed a joint Ph.D. at Carnegie Mellon University in the departments of Social Decision Science and Psychology. “So, they’d still be a little bit overconfident, but not as overconfident.”

“The LLMs did not do that,” said Cash, who was lead author of the study. “They tended, if anything, to get more overconfident, even when they didn’t do so well on the task.”

The world of AI is changing rapidly each day, which makes drawing general conclusions about its applications challenging, Cash acknowledged. However, one strength of the study was that the data was collected over the course of two years, which meant using continuously updated versions of the LLMs known as ChatGPT, Bard/Gemini, Sonnet and Haiku. This means that AI overconfidence was detectable across different models over time.

“When an AI says something that seems a bit fishy, users may not be as skeptical as they should be because the AI asserts the answer with confidence, even when that confidence is unwarranted,” said Danny Oppenheimer, a professor in CMU’s Department of Social and Decision Sciences and coauthor of the study."

Commentary: A win-win-win path for AI in America; The Post & Courier, July 22, 2025

 Keith Kupferschmid, The Post & Courier; Commentary: A win-win-win path for AI in America

"Contrary to claims that these AI training deals are impossible to make at scale, a robust free market is already emerging in which hundreds (if not thousands) of licensed deals between AI companies and copyright owners have been reached. New research shows it is possible to create fully licensed data sets for AI.

No wonder one federal judge recently called claims that licensing is impractical “ridiculous,” given the billions at stake: “If using copyrighted works to train the models is as necessary as the companies say, they will figure out a way to compensate copyright holders.” Just like AI companies don’t dispute that they have to pay for energy, infrastructure, coding teams and the other inputs their operations require, they need to pay for creative works as well.

America’s example to the world is a free-market economy based on the rule of law, property rights and freedom to contract — so, let the market innovate solutions to these new (but not so new) licensing challenges. Let’s construct a pro-innovation, pro-worker approach that replaces the false choice of the AI alarmists with a positive, pro-America pathway to leadership on AI."

Wave of copyright lawsuits hit AI companies like Cambridge-based Suno; WBUR, July 23, 2025

 

WBUR; Wave of copyright lawsuits hit AI companies like Cambridge-based Suno

"Suno, a Cambridge company that generates AI music, faces multiple lawsuits alleging it illegally trained its model on copyrighted work. Peter Karol of Suffolk Law School and Bhamati Viswanathan of Columbia University Law School's Kernochan Center for Law, Media, and the Arts join WBUR's Morning Edition to explain how the suits against Suno fit into a broader legal battle over the future of creative work.

This segment aired on July 23, 2025. Audio will be available soon."

Tuesday, July 22, 2025

Trump Told Park Workers to Report Displays That ‘Disparage’ Americans. Here’s What They Flagged.; The New York Times, July 22, 2025

Maxine Joselow, The New York Times; Trump Told Park Workers to Report Displays That ‘Disparage’ Americans. Here’s What They Flagged.


[Kip Currier: Trump's order directing National Park Service (NPS) staff to flag historical signs that "inappropriately disparage Americans" is contemptible and reads like a dystopian plot point befitting Fahrenheit 451 or 1984. It's also contrary to the advancement of knowledge and rigorous historical inquiry.

As a lifelong aficionado of the stunning diversity of America's national parks, I also find this directive deeply offensive because it seeks to sanitize and censor the complexity of U.S. history solely to satisfy one American's monarchical sense of what is and is not "appropriate". That is inherently un-American.

Thank goodness, then, that a heroic superteam of librarians, historians, and others are mobilizing right now to safeguard records of American history from erasure and expurgation. Until the day that fulsome, tangled, sobering, uplifting historical record -- our individual and collective history and legacy -- can be restored, appreciated, and learned from in all of its imperfectness.]


[Excerpt]

"According to internal documents reviewed by The New York Times, employees of the National Park Service have flagged descriptions and displays at scores of parks and historic sites for review in connection with President Trump’s directive to remove or cover up materials that “inappropriately disparage Americans.”

In an executive order in March, the president instructed the Park Service to review plaques, films and other materials presented to visitors at 433 sites around the country, with the aim of ensuring they emphasize the “progress of the American people” and the “grandeur of the American landscape.”

Employees had until last week to flag materials that could be changed or deleted, and the Trump administration said it would remove all “inappropriate” content by Sept. 17, according to the internal agency documents. The public also has been asked to submit potential changes.

In response, a coalition of librarians, historians and others organized through the University of Minnesota has launched a campaign called “Save Our Signs.” It is asking the public to take photos of existing content at national parks and upload it. The group is using those images to build a public archive before any materials may be altered. So far, it has more than 800 submissions."

Getting Along with GPT: The Psychology, Character, and Ethics of Your Newest Professional Colleague; ABA Journal, May 9, 2025

 ABA Journal; Getting Along with GPT: The Psychology, Character, and Ethics of Your Newest Professional Colleague

"The Limits of GenAI’s Simulated Humanity

  • Creative thinking. An LLM mirrors humanity’s collective intelligence, shaped by everything it has read. It excels at brainstorming and summarizing legal principles but lacks independent thought, opinions, or strategic foresight—all essential to legal practice. Therefore, if a model’s summary of your legal argument feels stale, illogical, or disconnected from human values, it may be because the model has no democratized data to pattern itself on. The good news? You may be on to something original—and truly meaningful!
  • True comprehension. An LLM does not know the law; it merely predicts legal-sounding text based on past examples and mathematical probabilities.
  • Judgment and ethics. An LLM does not possess a moral compass or the ability to make judgments in complex legal contexts. It handles facts, not subjective opinions.  
  • Long-term consistency. Due to its context window limitations, an LLM may contradict itself if key details fall outside its processing scope. It lacks persistent memory storage.
  • Limited context recognition. An LLM has limited ability to understand context beyond provided information and is limited by training data scope.
  • Trustfulness. Attorneys have a professional duty to protect client confidences, but privacy and PII (personally identifiable information) are evolving concepts within AI. Unlike humans, models can infer private information without PII, through abstract patterns in data. To safeguard client information, carefully review (or summarize with AI) your LLM’s terms of use."

Senators Introduce Bill To Restrict AI Companies’ Unauthorized Use Of Copyrighted Works For Training Models; Deadline, July 21, 2025

Ted Johnson, Deadline; Senators Introduce Bill To Restrict AI Companies’ Unauthorized Use Of Copyrighted Works For Training Models

"Sen. Josh Hawley (R-MO) and Sen. Richard Blumenthal (D-CT) introduced legislation on Monday that would restrict AI companies from using copyrighted material in their training models without the consent of the individual owner.

The AI Accountability and Personal Data Protection Act also would allow individuals to sue companies that uses their personal data or copyrighted works without their “express, prior consent.”

The bill addresses a raging debate between tech and content owners, one that has already led to extensive litigation. Companies like OpenAI have argued that the use of copyrighted materials in training models is a fair use, while figures including John Grisham and George R.R. Martin have challenged that notion."

Big Law Firms Bowed to Trump. A Corps of ‘Little Guys’ Jumped in to Fight Him.; The New York Times, July 21, 2025

The New York Times; Big Law Firms Bowed to Trump. A Corps of ‘Little Guys’ Jumped in to Fight Him.


[Kip Currier: A must-read article for anyone looking for lawyers willing "to fight the good fight". 

Kudos to these "officers of the court" who are standing up for the rule of law and the U.S. legal system's bedrock ethical principles and responsibilities.]


[Excerpt]

"President Trump’s executive orders seeking to punish big law firms have led some of them to acquiesce to him and left others reluctant to take on pro bono cases that could put them at odds with the administration.

But as opponents of the White House’s policies organized to fight Mr. Trump in court on a vast range of actions and policies, they quickly found that they did not need to rely on Big Law. Instead, an army of solo practitioners, former government litigators and small law firms stepped up to volunteer their time to challenge the administration’s agenda."

Monday, July 21, 2025

Following Trump cut to LGBTQ youth suicide hotline, California steps up to fill the gap; Governor Gavin Newsom, July 16, 2025

Governor Gavin Newsom; Following Trump cut to LGBTQ youth suicide hotline, California steps up to fill the gap

"Just weeks after the Trump administration announced that they would eliminate specialized suicide prevention support for LGBTQ youth callers through the 988 Suicide & Crisis Lifeline, California is taking action to improve behavioral health services and provide even more affirming and inclusive care. Through a new partnership with The Trevor Project, Governor Gavin Newsom and the California Health and Human Services Agency (CalHHS) will provide the state’s 988 crisis counselors enhanced competency training from experts, ensuring better attunement to the needs of LGBTQ youth, on top of the specific training they already receive.

This partnership builds on existing collaborations, like those under California’s Master Plan for Kids’ Mental Health, and reflects a shared commitment to evidence-based, LGBTQ+ affirming crisis care. Callers to 988 will continue to be met with the highest level of understanding, respect, and affirmation when they reach out for help.

“To every young person who identifies as LGBTQ+: You matter. You are not alone. California will continue to show up for you with care, with compassion, and with action,” said Kim Johnson, Secretary of CalHHS. “Through this partnership, California will continue to lead, providing enhanced support for these young people.”

“There could not be a more stark reminder of the moral bankruptcy of this Administration than cutting off suicide prevention resources for LGBTQ youth. These are young people reaching out in their time of deepest crisis—and I’m proud of California’s work to partner with the Trevor Project to creatively address this need. No matter what this Administration throws at us, I know this state will always meet cruelty with kindness and stand up for what’s right,” said First Partner Jennifer Siebel Newsom.

California’s crisis call centers

Across California, twelve 988 call centers remain staffed around the clock by trained crisis counselors, ready to support anyone in behavioral health crises, including LGBTQ youth.

If you, a friend, or a loved one are in crisis or thinking about suicide, you can call, chat, or text 988 and be immediately connected to skilled counselors at all times. Specialized services for LGBTQ youth are also available via The Trevor Project hotline at 1‑866‑488‑7386, which continues as a state-endorsed access point...


Why this matters

LGBTQ youth are four times more likely to attempt suicide than their peers, and without affirming services, their risk increases dramatically. Since its launch in 2022, the 988 LGBTQ+ “Press 3” line connected more than 1.5 million in crisis.

How to get help 

Call, text or chat 988 at any time to be connected with trained crisis counselors.

Call 1-866-488-7386, text START to 678678, or chat at TheTrevorProject.org/GetHelp to reach Trevor Project specialists.

Visit CalHOPE for non-crisis peer and family support."

Trump administration ends 988 Lifeline's special service for LGBTQ+ young people; NPR, July 19, 2025


Rhitu Chatterjee, NPR; Trump administration ends 988 Lifeline's special service for LGBTQ+ young people


[Kip Currier: Like the suspension of PEPFAR medicines for HIV prevention throughout the Global South and the dismantling of USAID, terminating Lifeline's specialized services for at-risk LGBTQ+ youth is another deeply cruel and indifferent policy decision by the Trump 2.0 administration that will result in loss of life. One has to wonder about the moral character of the individuals who are making these decisions.

California has introduced measures to provide these life-saving services for LGBTQ+ young persons, as reported in a July 16, 2025 press release:

Just weeks after the Trump administration announced that they would eliminate specialized suicide prevention support for LGBTQ youth callers through the 988 Suicide & Crisis Lifeline, California is taking action to improve behavioral health services and provide even more affirming and inclusive care. Through a new partnership with The Trevor Project, Governor Gavin Newsom and the California Health and Human Services Agency (CalHHS) will provide the state’s 988 crisis counselors enhanced competency training from experts, ensuring better attunement to the needs of LGBTQ youth, on top of the specific training they already receive.

Where are the voices of, for example, Big Tech gay billionaires like Apple CEO Tim Cook, Palantir co-founder Peter Thiel, and OpenAI co-founder/CEO Sam Altman -- who are privileged and blessed to be in positions of leadership and influence -- to speak out against policy decisions like this? Or step up to the plate and donate a fraction of their wealth to support services like Lifeline?]


[Excerpt]

"The nation's Suicide and Crisis Lifeline, 988, shuttered the specialized services for LGBTQ+ youth this week. The move came a day after the Lifeline marked three years since its launch. During this period, it has fielded more than 16 million calls, texts and chats. Nearly 10% of those contacts have been from gay and transgender young people, according to government data.

"This is a tragic moment," says Mark Henson, vice president of government affairs and advocacy at The Trevor Project, one of several organizations that had contracts with the federal government to provide counseling services for this vulnerable population. The Trevor Project fields about half the LGBTQ+ contacts.

Data from the Youth Behavior Risk Survey, conducted by the Centers for Disease Control and Prevention, show that LGBTQ+ youth are more likely to experience persistent feelings of sadness and hopelessness compared to their peers, and more likely to attempt suicide.

When these young people contact 988, they have had the option to press 3 to be connected to a counselor specifically trained to support their unique mental health needs, which are associated with discrimination and violence they often face. This service is similar to what 988 offers to veterans, who are also at a higher risk of suicide, and can access support tailored for them by pressing 1 when they contact 988. That service will be retained as 988 enters its fourth year.

"Many LGBTQ+ youth who use these services didn't know they existed until they called 988 and found out there is someone on the other end of the line that knows what they've gone through and cares deeply for them," says Henson.

Government data show that demand for this service grew steadily since it launched, from about 2,000 contacts per month in September 2022 to nearly 70,000 in recent months."


Sunday, July 20, 2025

Clergy grapple with the ethics of using AI to write sermons; 90.5 WESA, July 17, 2025

Deena Prichep, 90.5 WESA; Clergy grapple with the ethics of using AI to write sermons

"AILSA CHANG, HOST:

On any given Sunday, churchgoers settle into pews and listen to a sermon. A member of the clergy uses text from the Bible and figures out what it has to say about our lives today, right? But how would you feel if you found out that sermon was written by artificial intelligence? Deena Prichep reports.

DEENA PRICHEP, BYLINE: Writing and delivering sermons, homiletics if you're in the biz, is not an easy job.

NAOMI SEASE CARRIKER: It's like a mini research paper. You have to prepare every week, and some weeks, life is just a lot.

PRICHEP: Naomi Sease Carriker is pastor of Messiah of the Mountains Lutheran Church in North Carolina and recently had one of those weeks, so she popped open ChatGPT.

CARRIKER: And boom, literally within not even 30 seconds, I had a 900-word sermon. And I read through it, and I was like, oh, my God, this is really good.

PRICHEP: But she also thought...

CARRIKER: This feels wrong.

PRICHEP: It's an ethical question clergy across the country are wrestling with. When it comes to something like homework, the goal is students learning. So using AI can get in the way of that. But the goal of a sermon is basically to tell a story that can break open the hearts of people to a holy message. So does it matter where that comes from? Some denominations have issued general guidelines urging thought and caution, but they don't really mention specifics, given that the technology is changing so quickly. So clergy are left to figure it out themselves. Naomi Sease Carriker decided not to preach that AI sermon, but she does use the tech to get her draft started or wrap up what she's written with a nice conclusion, and that feels OK."

Ice chief says he will continue to allow agents to wear masks during arrest raids; The Guardian, July 20, 2025

The Guardian; Ice chief says he will continue to allow agents to wear masks during arrest raids

"The head of US Immigration and Customs Enforcement (Ice) said on Sunday that he will continue allowing the controversial practice of his officers wearing masks over their faces during their arrest raids.

As Donald Trump has ramped up his unprecedented effort to deport immigrants around the country, Ice officers have become notorious for wearing masks to approach and detain people, often with force. Legal advocates and attorneys general have argued that it poses accountability issues and contributes to a climate of fear.

On Sunday, Todd Lyons, the agency’s acting director, was asked on CBS Face the Nation about imposters exploiting the practice by posing as immigration officers. “That’s one of our biggest concerns. And I’ve said it publicly before, I’m not a proponent of the masks,” Lyons said.

“However, if that’s a tool that the men and women of Ice to keep themselves and their family safe, then I will allow it.”

Lyons has previously defended the practice of mask-wearing, telling Fox News last week that “while I’m not a fan of the masks, I think we could do better, but we need to protect our agents and officers”, claiming concerns about doxxing (the public revealing of personal information such as home addresses), and declaring that assaults of immigration officers have increased by 830%."

The USDA wants states to hand over food stamp data by the end of July; NPR, July 19, 2025

NPR; The USDA wants states to hand over food stamp data by the end of July

"When Julliana Samson signed up for Supplemental Nutrition Assistance Program (SNAP) benefits to help afford food as she studied at the University of California, Berkeley, she had to turn in extensive, detailed personal information to the state to qualify.

Now she's worried about how that information could be used.

The U.S. Department of Agriculture has made an unprecedented demand to states to share the personal information of tens of millions of federal food assistance recipients by July 30, as a federal lawsuit seeks to postpone the data collection...

She and three other SNAP recipients, along with a privacy organization and an anti-hunger group, are challenging USDA's data demand in a federal lawsuit, arguing the agency has not followed protocols required by federal privacy laws. Late Thursday, they asked a federal judge to intervene to postpone the July 30 deadline and a hearing has been scheduled for July 23.

"I am worried my personal information will be used for things I never intended or consented to," Samson wrote recently as part of an ongoing public comment period for the USDA's plan. "I am also worried that the data will be used to remove benefits access from student activists who have views the administration does not agree with."

AI guzzled millions of books without permission. Authors are fighting back.; The Washington Post, July 19, 2025

The Washington Post; AI guzzled millions of books without permission. Authors are fighting back.


[Kip Currier: I've written this before on this blog and I'll say it again: technology companies would never allow anyone to freely vacuum up their content and use it without permission or compensation. Period. Full Stop.]


[Excerpt]

"Baldacci is among a group of authors suing OpenAI and Microsoft over the companies’ use of their work to train the AI software behind tools such as ChatGPT and Copilot without permission or payment — one of more than 40 lawsuits against AI companies advancing through the nation’s courts. He and other authors this week appealed to Congress for help standing up to what they see as an assault by Big Tech on their profession and the soul of literature.

They found sympathetic ears at a Senate subcommittee hearing Wednesday, where lawmakers expressed outrage at the technology industry’s practices. Their cause gained further momentum Thursday when a federal judge granted class-action status to another group of authors who allege that the AI firm Anthropic pirated their books.

“I see it as one of the moral issues of our time with respect to technology,” Ralph Eubanks, an author and University of Mississippi professor who is president of the Authors Guild, said in a phone interview. “Sometimes it keeps me up at night.”

Lawsuits have revealed that some AI companies had used legally dubious “torrent” sites to download millions of digitized books without having to pay for them."

Judge Rules Class Action Suit Against Anthropic Can Proceed; Publishers Weekly, July 18, 2025

Jim Milliot, Publishers Weekly; Judge Rules Class Action Suit Against Anthropic Can Proceed

"In a major victory for authors, U.S. District Judge William Alsup ruled July 17 that three writers suing Anthropic for copyright infringement can represent all other authors whose books the AI company allegedly pirated to train its AI model as part of a class action lawsuit.

In late June, Alsup of the Northern District of California, ruled in Bartz v. Anthropic that the AI company's training of its Claude LLMs on authors' works was "exceedingly transformative," and therefore protected by fair use. However, Alsup also determined that the company's practice of downloading pirated books from sites including Books3, Library Genesis, and Pirate Library Mirror (PiLiMi) to build a permanent digital library was not covered by fair use.

Alsup’s most recent ruling follows an amended complaint from the authors looking to certify classes of copyright owners in a “Pirated Books Class” and in a “Scanned Books Class.” In his decision, Alsup certified only a LibGen and PiLiMi Pirated Books Class, writing that “this class is limited to actual or beneficial owners of timely registered copyrights in ISBN/ASIN-bearing books downloaded by Anthropic from these two pirate libraries.”

Alsup stressed that “the class is not limited to authors or author-like entities,” explaining that “a key point is to cover everyone who owns the specific copyright interest in play, the right to make copies, either as the actual or as the beneficial owner.” Later in his decision, Alsup makes it clear who is covered by the ruling: “A beneficial owner...is someone like an author who receives royalties from any publisher’s revenues or recoveries from the right to make copies. Yes, the legal owner might be the publisher but the author has a definite stake in the royalties, so the author has standing to sue. And, each stands to benefit from the copyright enforcement at the core of our case however they then divide the benefit.”"

US authors suing Anthropic can band together in copyright class action, judge rules; Reuters, July 17, 2025

Reuters; US authors suing Anthropic can band together in copyright class action, judge rules

"A California federal judge ruled on Thursday that three authors suing artificial intelligence startup Anthropic for copyright infringement can represent writers nationwide whose books Anthropic allegedly pirated to train its AI system.

U.S. District Judge William Alsup said the authors can bring a class action on behalf of all U.S. writers whose works Anthropic allegedly downloaded from "pirate libraries" LibGen and PiLiMi to create a repository of millions of books in 2021 and 2022."

Friday, July 18, 2025

Trump administration to destroy nearly $10m of contraceptives for women overseas; The Guardian, July 18, 2025

The Guardian; Trump administration to destroy nearly $10m of contraceptives for women overseas

"The Trump administration has decided to destroy $9.7m worth of contraceptives rather than send them abroad to women in need.

A state department spokesperson confirmed that the decision had been made – a move that will cost US taxpayers $167,000. The contraceptives are primarily long-acting, such as IUDs and birth control implants, and were almost certainly intended for women in Africa, according to two senior congressional aides, one of whom visited a warehouse in Belgium that housed the contraceptives. It is not clear to the aides whether the destruction has already been carried out, but they said they had been told that it was set to occur by the end of July."