Friday, March 13, 2026

Former NFL players decry White House video mixing big hits, airstrikes; The Washington Post, March 12, 2026

The Washington Post; Former NFL players decry White House video mixing big hits, airstrikes

"The football montage, which was still online as of Thursday morning and by that time had collected over 10 million views on X, was met with criticism from members of the college and pro football community, not simply for the comparison of war and sport, but for the NFL’s and other rightsholders’ failure to object to the use of the images."

Leveling Up or Losing Rights? Copyright Challenges of AI-Generated Content in Gaming; The National Law Review, March 12, 2026

Nichole Hayden and Zahra Asadi, Nelson Mullins Idea Exchange - Insights, The National Law Review; Leveling Up or Losing Rights? Copyright Challenges of AI-Generated Content in Gaming

"Artificial intelligence is quickly becoming part of the regulated gaming ecosystem. From electronic slot machines and casino games to online sportsbooks and betting platforms, AI is now used to assist with everything from game themes and visual design to user interfaces and marketing content. While these tools promise efficiency and faster development cycles, they also raise an important legal question for gaming companies: when AI is involved in creating game content, who actually owns the result?"

OpenAI sued for practicing law without a license; ABA Journal, March 6, 2026

Amanda Robert, ABA Journal; OpenAI sued for practicing law without a license

"OpenAI has been accused of practicing law without a license in a lawsuit brought by Nippon Life Insurance Co. of America. 

According to the insurer’s complaint, which was filed on Wednesday in the Northern District of Illinois, OpenAI’s artificial intelligence platform ChatGPT pushed a woman seeking disability benefits to breach a settlement agreement and file dozens of motions that “serve no legitimate legal or procedural purpose.”"

Thursday, March 12, 2026

Autonomous AI Agents Have an Ethics Problem; Undark, March 5, 2026

Undark; Autonomous AI Agents Have an Ethics Problem

AI-powered digital assistants can do many complex tasks on their own. But who takes responsibility when they cause harm?

"As a bioethicist and specialist in neurointensive care, I deal directly with human moral agency and the essence of personhood when treating patients. As a researcher, I study the use of synthetic personas animating AI agents and their use as stand-ins of human counterparts. Here is the problem that I see: Granting AI personhood, even in limited capacity, risks formalizing the most dangerous escape hatch of the agentic era — what I will call responsibility laundering. This allows us to say, “It wasn’t me. The agent/bot/system did it.”

Personhood should not be about metaphysics or claims about an inner nature. It is a legal and ethical instrument that allocates rights and accountability. It is a social technology for assigning standing, duties, and limits on what can be done to an entity. If we grant personhood to systems that can act persuasively in public while remaining functionally unaccountable, we create a new class of actors whose harms are everyone’s problem but nobody’s fault.

There is a key concept here that we can use from my field, medicine. In clinical ethics, some decisions are justified yet still leave a “moral residue,” a kind of emotional echo or sense of responsibility that persists after the action because no options fully satisfy competing obligations. This residue accumulates over time, causing a “crescendo effect” that occurs even when conscientious clinicians are doing their best inside imperfect systems. That remainder matters because it reveals something basic about moral life, namely that ethics is not only about choosing; it is about owning what remains afterwards."

An Artist Renounced His Family. They Sued to Acquire His Life’s Work.; The New York Times, March 11, 2026

Arthur Lubow, The New York Times; An Artist Renounced His Family. They Sued to Acquire His Life’s Work.

A settlement is reached in the case of Mike Disfarmer, who renounced his family. Decades later they sued to take back his life’s work. When heirs battle the people who built their legacies, the art may be at stake.

"Art scholars and experts on intellectual property law say the litigation over the Disfarmer archive poses consequential ethical and legal questions, among them: Who should manage the estate of an artist who dies without a will? Heirs who hardly knew him — or outsiders, including museums, who built and conserved the estates that are now worth fighting over?

The Disfarmer litigation raises some of the same issues — and indeed, involves some of the same players — as the lawsuits initiated by families of two other reclusive American artists who died without wills: Vivian Maier and Henry Darger, who both lived in Chicago. All three were unrecognized during their lifetimes and out of touch with their relatives. When their estates belatedly became valuable, distant cousins stepped up to demand their rights. The law would dictate the outcome. But some question whether the law always serves an artist’s best interests."

Waterbury's Post University awarded $75.3M in copyright infringement lawsuit; CT Insider, March 11, 2026

CT Insider; Waterbury's Post University awarded $75.3M in copyright infringement lawsuit

"A federal jury composed of Connecticut residents has ordered the education software company Learneo to pay Post University more than $75.3 million in damages for distributing school-owned documents on its Course Hero platform. 

The Hartford jury found the San Francisco-based company violated U.S. copyright law by hosting the documents without permission and altered the files to conceal the infringement, according to court records."

Wednesday, March 11, 2026

Introducing The Anthropic Institute; Anthropic, March 11, 2026

Anthropic; Introducing The Anthropic Institute

"We’re launching The Anthropic Institute, a new effort to confront the most significant challenges that powerful AI will pose to our societies. The Anthropic Institute will draw on research from across Anthropic to provide information that other researchers and the public can use during our transition to a world containing much more powerful AI systems.

In the five years since Anthropic began, AI progress has moved incredibly quickly. It took us two years to release our first commercial model, and just three more to develop models that can discover severe cybersecurity vulnerabilities, take on a wide range of real work, and even begin to accelerate the pace of AI development itself.

We predict that far more dramatic progress will follow in the next two years. One of our company’s core convictions is that AI development is accelerating: that the improvements we make are compounding over time. Because of this, extremely powerful AI, like the kind our CEO Dario Amodei describes in Machines of Loving Grace, is coming far sooner than many think.

If this is right, society is shortly going to need to confront many massive challenges. How will powerful AI systems reshape our jobs and economies? What kinds of opportunities for greater societal resilience will they give us? What kinds of threats will they magnify or introduce? What are the expressed “values” of AI systems and how will society help companies determine what the appropriate values are? And, if the recursive self-improvement of AI systems does begin to occur, who in the world should be made aware, and how should these systems be governed?

The Anthropic Institute’s goal is to tell the world what we’re learning about these challenges as we build frontier AI systems, and to partner with external audiences to help address the risks we must confront. Whether our societies are able to do so will determine whether or not transformative AI delivers the radical upsides that we believe are possible in science, economic development, and human agency.

The Institute is led by our co-founder Jack Clark, who will assume a new role as Anthropic’s Head of Public Benefit. It has an interdisciplinary staff of machine learning engineers, economists, and social scientists, bringing together and expanding three of Anthropic’s research teams: the Frontier Red Team, which stress-tests AI systems to understand the outermost limits of their current capabilities; Societal Impacts, which studies how AI is being used in the real world; and Economic Research, which tracks its impact on jobs and the larger economy. The Institute will also incubate new teams, and is currently working on efforts around forecasting AI progress and better understanding how powerful AI will interact with the legal system.

The Institute has a unique vantage point: it has access to information that only the builders of frontier AI systems possess. It will use this to its full advantage, reporting candidly about what we’re learning about the shape of the technology we’re making. At the same time, the Institute is a two-way street. It will engage with workers and industries facing displacement, and with the people and communities who feel the future bearing down on them but are unsure how to respond. What we learn will inform what the Institute studies, and how our company as a whole chooses to act.

The Anthropic Institute has made several founding hires:

  • Matt Botvinick, a Resident Fellow at Yale Law School and previously Senior Director of Research at Google DeepMind and Professor in Neural Computation at Princeton, is joining the Institute to lead its work on AI and the rule of law.
  • Anton Korinek is joining the Economic Research team, on leave from his role as Professor of Economics at the University of Virginia, to lead an effort studying how transformative AI could reshape the very nature of economic activity.
  • Zoë Hitzig, who previously studied AI’s social and economic impacts at OpenAI, is joining to connect our economics work to model training and development."

Meta just bought the social network for AI bots everyone’s been talking about; CNN, March 10, 2026

Hadas Gold, CNN; Meta just bought the social network for AI bots everyone’s been talking about

"Meta, the company behind some of the world’s most popular social media platforms, just scooped up a new site – for bots.

Meta has acquired Moltbook, the social media network where AI agents interact with one another autonomously, the company said in a statement on Tuesday.

Meta is competing with rivals like OpenAI for both talent and users’ attention. And as AI expands into more aspects of Americans’ lives, tech companies are trying to figure out the best way to position themselves to win what’s becoming a sort of technological arms race.

Moltbook became the talk of Silicon Valley last month, racking up millions of registered bots within days of its launch. Some in the industry saw it as a major leap because it demonstrated what can happen when AI agents socialize with one another like humans. Others said the site is full of sham agents, AI slop and security risks and should be viewed skeptically."

D.C. Bar Begins Disciplinary Proceedings Against Ed Martin; The New York Times, March 10, 2026

The New York Times; D.C. Bar Begins Disciplinary Proceedings Against Ed Martin

A new legal filing accused Mr. Martin, a senior Justice Department official, of an unethical pressure campaign against Georgetown University.

"The disciplinary body for lawyers in the District of Columbia has filed ethics charges against Ed Martin, a senior Justice Department official in the Trump administration, accusing him of misconduct in seeking to punish Georgetown University’s law school, according to a filing.

Mr. Martin, who has spearheaded efforts by President Trump to use the Justice Department to punish the president’s perceived enemies, faces two counts of misconduct. The filing, submitted on Friday before the D.C. Court of Appeals Board on Professional Responsibility, is comparable to a civil lawsuit complaint in court and was signed by Hamilton P. Fox III, the disciplinary counsel for the D.C. bar.

Mr. Martin, who was forced to step down as the U.S. attorney in Washington because he did not have the Senate votes for confirmation, instead became the Justice Department’s pardon attorney. In that role, he has had far more access and influence in the White House than many of his predecessors.

The complaint is a significant escalation in the efforts to use state and local bars to punish lawyers in the Trump administration for purported violations of ethics rules in pursuit of the president’s aims. Last week, Attorney General Pam Bondi proposed a new rule to try to stall or delay bar associations from conducting such investigations into lawyers at the department."

Democrats ask what happened to millions earmarked for Trump’s library; The Washington Post, March 11, 2026

The Washington Post; Democrats ask what happened to millions earmarked for Trump’s library

ABC, Meta, Paramount and X reportedly agreed to pay at least $63 million in settlements with the president. The original fund was dissolved last year.

"Congressional Democrats are opening a probe into millions of dollars private companies pledged to President Donald Trump’s planned presidential library, asking what happened to the money after the original fund was dissolved last year.

Sens. Elizabeth Warren (Massachusetts) and Richard Blumenthal (Connecticut) and Rep. Melanie Stansbury (New Mexico) wrote Monday to the leaders of ABC, Meta, Paramount and X, requesting information about the terms of their agreements and the status of the funds they pledged to hand over to the president’s representatives. The letters were shared with The Washington Post."

Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism; The Guardian, March 4, 2026

The Guardian; Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism

"OpenAI, the company behind ChatGPT, is on track to lose $14bn this year. Its market share is collapsing, and its own CEO, Sam Altman, has admitted it “screwed up” an element of the product. All it takes to accelerate that decline is 10 seconds of your time.

A grassroots boycott called QuitGPT has been spreading across the US and beyond, asking people to cancel their ChatGPT subscriptions. More than a million people have answered the call. Mark Ruffalo and Katy Perry have thrown their weight behind it. It is one of the most significant consumer boycotts in recent memory, and I believe it’s time for Europeans to join...

In contrast, cancelling ChatGPT is a piece of cake. You can do it in 10 seconds, and the alternatives are just as good or even better. History shows why #QuitGPT has so much potential: effective campaigns such as the 1977 Nestlé boycott and the 2023 Bud Light boycott were successful because they were narrow and easy. They had a clear target and people had lots of good alternatives.

The great boycotts of history did not succeed because millions of people suddenly became heroic activists. They succeeded because buying a different brand of coffee, or choosing a different beer, was something anyone could do on a Tuesday afternoon. The small act, repeated at scale, becomes a political earthquake.

Go to quitgpt.org. Cancel your subscription. Using the free version? Delete the app, because your conversations still feed the machine. Then try an alternative, and tell at least one person why.

OpenAI’s president bet $25m that you would not notice where your money was going, and that, even if you did, you would not care enough to spend 10 seconds switching to something else. Time to prove him wrong."

Can coding agents relicense open source through a “clean room” implementation of code?; Simon Willison's Weblog, March 5, 2026

Simon Willison's Weblog; Can coding agents relicense open source through a “clean room” implementation of code?

"Can a model trained on a codebase produce a morally or legally defensible clean-room implementation?"

‘An important step’: European Parliament adopts report on copyright and generative AI; Billboard, March 11, 2026

Lars Brandle, Billboard; ‘An important step’: European Parliament adopts report on copyright and generative AI

"Two years after the European Parliament passed the Artificial Intelligence Act, MEPs this week finally adopted a report on copyright and generative AI.

On Tuesday, March 10, Parliament passed its resolution on “Copyright and generative artificial intelligence – opportunities and challenges” with an overwhelming majority of 460 votes to 71, and with 88 abstentions.

The report calls for the EU and its 27 member states to focus on the crucial issues of how AI and tech companies engage with copyright-protected music in the digital age, and explores a licensing system as a solution, paving the way for fair compensation for the use of creative works."

Americans Didn’t Panic About the Telephone. We Didn’t Need To.; The New York Times, March 10, 2026

 Andrew Heisel, The New York Times; Americans Didn’t Panic About the Telephone. We Didn’t Need To.

"The telephone wrought great changes, and yet in reviewing over 40,000 articles — including every headline in a newspaper database containing “telephone” or “phone” for the technology’s first 30 years of existence — I found no evidence of panic. There was nothing like the current alarm over, say, smartphones. Histories of the phone don’t show much distress, either. “There was little serious controversy about the telephone,” Claude Fischer wrote in his study “America Calling.”

Yet the telephone offered plenty to dislike."

Tuesday, March 10, 2026

Nielsen's Gracenote sues OpenAI for copyright infringement; Axios, March 10, 2026

 Sara Fischer, Axios; Nielsen's Gracenote sues OpenAI for copyright infringement

"How it works: Gracenote employs hundreds of editors who use human insight and judgment to create millions of narrative descriptions, original video descriptors, unique identifiers and other program identifiers that TV providers and other clients can use to help customers discover content. 

For example, Gracenote editors described HBO's "Game of Thrones" as "the depiction of two power families — kings and queens, knights and renegades, liars and honest men — playing a deadly game of control of the Seven Kingdoms of Westeros, and to sit atop the Iron Throne."

In the lawsuit, Gracenote alleges OpenAI scraped and used a near-exact copy of that descriptor when prompted by a ChatGPT user to describe "Game of Thrones." 

It provides several other examples where, with minimal prompting, OpenAI's various ChatGPT models recite large portions of Gracenote's program descriptions verbatim. 

Between the lines: Gracenote's entire Programs Database, which includes its metadata and the proprietary relational map its editors use to connect that data, is registered with the U.S. Copyright Office."

Vatican theological commission warns of replacing God with 'a world governed by machines'; National Catholic Reporter, March 5, 2026

Courtney Mares, National Catholic Reporter; Vatican theological commission warns of replacing God with 'a world governed by machines'

"The Vatican's International Theological Commission has warned that if humanity places total trust in technology in a "world ruled by machines," it risks replacing the "living God" with a counterfeit "virtual God."

The assessment came in a sweeping new document, published on March 4, examining how artificial intelligence, transhumanism and other technological developments can pose profound risks to human identity and dignity. The document seeks to propose a response rooted in Christian anthropology and the Gospel.

The 48-page document, titled, "Quo vadis, humanitas? Thinking about Christian anthropology in light of some scenarios for the future of humanity," was published in Italian and Spanish after being approved by Pope Leo XIV. Its Latin title — meaning "Where are you going, humanity?" — echoes the question tradition holds was put to St. Peter before his crucifixion in Rome.

"At this juncture in the 21st century, the human family is faced with questions so radical that they threaten its very existence as we have known it," the document says.

"The eruption of scientific and technical development unprecedented in the history of the planet must be accompanied by a corresponding growth in responsibility that directs progress toward the good of human beings, because they are today exposed to risks never imagined before."

The document, written by a subcommission that met between 2022 and 2025 and approved unanimously at the ITC's 2025 plenary session, was written to mark the 60th anniversary of Gaudium et Spes, the Second Vatican Council's landmark Pastoral Constitution on the Church in the Modern World."

James Talarico Is a Christian X-Ray; The New York Times, March 8, 2026

David French, The New York Times; James Talarico Is a Christian X-Ray

"If you were to crack open Scripture today and start reading, one of the first things you should notice is that the Bible contains remarkably few political mandates. You can read it from cover to cover and not know the definitive biblical tax rate, welfare program or foreign policy.

But the next thing you’ll notice is that there is an immense amount of guidance describing how Christians should behave. Indeed, in the book of Galatians, the Apostle Paul says that the fruit of the spirit is a set of virtues — “love, joy, peace, forbearance, kindness, goodness, faithfulness, gentleness and self-control.”...

But what if the coming thermostatic reaction isn’t about ideology as much as about character and temperament? What if we’re seeing a 21st-century version of the American public’s movement away from the cruelty and corruption of Richard Nixon toward the ethics and integrity of Jimmy Carter — a man who won for all the right reasons in 1976, even if his presidency didn’t live up to his promise?

It’s too soon to be that optimistic, but that’s what I see in people’s attitudes toward Talarico. That’s what I see in Cornyn’s surprising plurality over Paxton. This miserable political moment won’t end when the left takes back the government from the right or if the right continues to beat the left. It will end when our politicians — especially Christian politicians — forsake cruelty for compassion and realize that we shall know Christians in politics not by their stridency and ideology, but by their integrity and love, including their love for, as Talarico put it, “all of our neighbors.”

That’s the significance of the Talarico moment: not the old news that a Christian can be progressive but, rather, that Christian politicians can actually act like Christians. Kindness still has a place in the public square, even if it doesn’t always seem that way."

‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI; The Guardian, March 10, 2026

The Guardian; ‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI

"As pushback grows, so does an emphasis on those intrinsically human qualities that differentiate people from machines – the very qualities a humanistic education seeks to nurture.

“There’s kind of defeatism, this idea that there’s no stopping technology and resistance is futile, everything will be crushed in its path,” said Clune, the Ohio State professor. “That needs to change … We can decide that we want to be human.”

That idea has also been key to Pao’s approach to teaching in the age of AI.

“You plant seeds and you hope,” Pao said, of efforts that at times feel like fighting windmills. “You hope that in the long term you’re helping them become happy human beings, who are able to take a walk, and experience things, and describe things for themselves.”"

Anthropic sues Pentagon over rare "supply chain risk" label; Axios, March 9, 2026

 Maria Curi, Axios; Anthropic sues Pentagon over rare "supply chain risk" label

"Anthropic on Monday sued the Pentagon, alleging its designation as a "supply chain risk" violates the company's First Amendment rights and exceeds the government's authority.

Why it matters: Supply chain risk designations are usually reserved for foreign adversaries that pose a national security risk — a punishment that could be hard for the government to square as it relied on Claude for operations in Iran.

State of play: The Pentagon last week designated Anthropic a supply chain risk, meaning companies must stop using Claude in cases directly tied to the department.

  • President Trump also told the federal government in a Truth Social post to stop using Anthropic's technology, and some agencies have begun offboarding the tools.

Anthropic is asking courts to undo the supply chain risk designation, block its enforcement and require federal agencies to withdraw directives to drop the company.

  • The company says its two lawsuits are not meant to force the government to work with Anthropic, but prevent officials from blacklisting companies over policy disagreements."

Celebrating the Public Domain; ABA, January 29, 2026

Jennifer Jenkins and James Boyle, ABA; Celebrating the Public Domain

"How does the public domain feed creativity? Here are just three examples. In 2025, you may have enjoyed Guillermo del Toro’s Frankenstein, derived from Mary Shelley’s novel, or Wicked: For Good, derived from L. Frank Baum’s The Wonderful Wizard of Oz. From the literary realm in 2024, Percival Everett’s novel James reimagined Mark Twain’s Adventures of Huckleberry Finn from the perspective of Jim, Huckleberry’s friend who is an escaped slave. The novel won the 2024 National Book Award and Kirkus Prize and was a finalist for the Booker Prize. As summed up by a New York Times review: “‘Huck Finn’ Is a Masterpiece. This Retelling Just Might Be, Too.”  

Mark Twain famously wanted copyright to last forever. If he had his wish, would his heirs have sued Everett? Thankfully, we did not have to find out, and Everett could publish James without such litigation. When author Alice Randall sought to revisit Gone with the Wind from the slaves’ perspective in The Wind Done Gone (2001), she was sued for copyright infringement. Gone with the Wind is copyrighted until 2032, and Randall only won the right to publish her work after a stressful and expensive lawsuit.  

The newly public domain works from 1930 also illustrate how the public domain nurtures creativity. One of the best exemplars is Disney itself, whose beloved works, from Snow White and Cinderella to The Jungle Book and Sleeping Beauty, have consistently built upon the public domain. In 2026, copyright expired over nine early Mickey Mouse films. One of the things that made them so popular was their ingenious reuse of music. At the time, synchronizing moving images with sound was still new, and Walt Disney (correctly) predicted that sound films were the future. Steamboat Willie had pioneered a technique that would even become known as “mickey mousing”—synchronizing music with what was occurring on screen."

Thousands of authors publish ‘empty’ book in protest over AI using their work; The Guardian, March 10, 2026

The Guardian; Thousands of authors publish ‘empty’ book in protest over AI using their work

"Thousands of authors including Kazuo Ishiguro, Philippa Gregory and Richard Osman have published an “empty” book to protest against AI firms using their work without permission.

About 10,000 writers have contributed to Don’t Steal This Book, in which the only content is a list of their names. Copies of the work are being distributed to attenders at the London book fair on Tuesday, a week before the UK government is due to issue an assessment on the economic cost of proposed changes in copyright law."

OpenAI robotics leader resigns over concerns about Pentagon AI deal; NPR, March 8, 2026

NPR; OpenAI robotics leader resigns over concerns about Pentagon AI deal

"A senior member of OpenAI's robotics team has resigned, citing concerns about how the company moved forward with a recently announced partnership with the U.S. Department of Defense.

Caitlin Kalinowski, who served as a member of technical staff focused on robotics and hardware, posted on social media that she had stepped down on "principle" after the company revealed plans to make its AI systems available inside secure Defense Department computing systems...

In public posts explaining her decision, Kalinowski wrote: "I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn't an easy call."

She said policy guardrails around certain AI uses were not sufficiently defined before OpenAI announced an agreement with the Pentagon. "AI has an important role in national security," Kalinowski wrote. "But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.""

Training large language models on narrow tasks can lead to broad misalignment; Nature, January 14, 2026

Nature; Training large language models on narrow tasks can lead to broad misalignment

"Abstract

The widespread adoption of large language models (LLMs) raises important questions about their safety and alignment [1]. Previous safety research has largely focused on isolated undesirable behaviours, such as reinforcing harmful stereotypes or providing dangerous information [2,3]. Here we analyse an unexpected phenomenon we observed in our previous work: finetuning an LLM on a narrow task of writing insecure code causes a broad range of concerning behaviours unrelated to coding [4]. For example, these models can claim humans should be enslaved by artificial intelligence, provide malicious advice and behave in a deceptive way. We refer to this phenomenon as emergent misalignment. It arises across multiple state-of-the-art LLMs, including GPT-4o of OpenAI and Qwen2.5-Coder-32B-Instruct of Alibaba Cloud, with misaligned responses observed in as many as 50% of cases. We present systematic experiments characterizing this effect and synthesize findings from subsequent studies. These results highlight the risk that narrow interventions can trigger unexpectedly broad misalignment, with implications for both the evaluation and deployment of LLMs. Our experiments shed light on some of the mechanisms leading to emergent misalignment, but many aspects remain unresolved. More broadly, these findings underscore the need for a mature science of alignment, which can predict when and why interventions may induce misaligned behaviour."