Monday, March 16, 2026

Trump fundraising email uses photo of March 7 dignified transfer of deceased soldier; Army Times, March 14, 2026

Army Times; Trump fundraising email uses photo of March 7 dignified transfer of deceased soldier

"A fundraising email distributed Thursday by a political action committee linked to President Donald Trump included a photo of a March 7 dignified transfer of a U.S. soldier killed by an Iranian drone strike in Kuwait. 

The email, which was signed “President Donald J. Trump” and paid for by Never Surrender Inc., promises to make donors part of a “National Security Briefing Membership.” It was first pointed out on X by Patriot Takes.

The embedded photo of the dignified transfer, taken by White House photographer Daniel Torok, is included in the email and bracketed by icons featuring the text, “CLAIM YOUR SPOT,” which can be clicked on to donate. 

In the photo, Trump, wearing a white USA baseball hat, salutes as a flag-draped casket of a fallen soldier is transferred by an Army carry team. The casket included the remains of one of six soldiers returned to U.S. soil that day, the first American casualties of Operation Epic Fury."

Teen, 14, Diagnosed with Rare Cancer, Used His Single Make-A-Wish Gift Not for Himself, but Others in His Community; People, March 14, 2026

Toria Sheffield, People; Teen, 14, Diagnosed with Rare Cancer, Used His Single Make-A-Wish Gift Not for Himself, but Others in His Community

"A Georgia teen used his single Make-A-Wish gift to help others in his community.

Jude Baker was diagnosed with Ewing sarcoma, a rare and aggressive form of cancer that affects bone or surrounding tissue, when he was 12 years old, according to local outlet 11 Alive.

Baker, now 14, soon began chemotherapy after his diagnosis. He said it was even more painful than the reality that he could succumb to the illness...

Because of his diagnosis, Baker qualified for a wish with the Make-A-Wish Foundation, a nonprofit that grants "wishes" to children ages 3 to 17 who are diagnosed with critical illnesses.

And while most kids will ask for things like a fun trip or meeting a celebrity, Baker instead asked for something different: to help the homeless in his area...

Make-A-Wish collected sleeping bags, packed backpacks full of supplies and prepared hot meals for homeless individuals in the area for one day.

Over 300 people ultimately received assistance because of Baker's Make-A-Wish, per 11 Alive.

The teenager, who is now in remission, said he hopes his wish helped remind others that there are always opportunities to assist those in need."

UK to rule out sweeping AI copyright overhaul; Politico, March 11, 2026

Joseph Bambridge, Politico; UK to rule out sweeping AI copyright overhaul

The U.K. will rule out making creatives actively opt out of having their copyrighted material scraped by AI companies.

"The U.K. government will rule out sweeping reform of its copyright laws in a highly-anticipated policy update next week, according to three people briefed on government thinking and granted anonymity to speak freely. 

The people said the update, due by March 18, will state the government does not plan to take forward work on an “opt out” model, whereby rights holders would have to explicitly say they do not want their work used to train AI models. 

It comes amid intense pressure from rights holders and lawmakers not to pursue the “opt out” policy. The government previously said this was its “preferred option” to facilitate AI innovation in the U.K., before ministers were forced to row back."

F.C.C. Chair Threatens to Revoke Broadcasters’ Licenses Over War Coverage; The New York Times, March 14, 2026

The New York Times; F.C.C. Chair Threatens to Revoke Broadcasters’ Licenses Over War Coverage

The comment from Brendan Carr came on the heels of a social media message from President Trump criticizing the news media’s coverage of the war with Iran.

"Brendan Carr, the chairman of the Federal Communications Commission, threatened on Saturday to revoke broadcasters’ licenses over their coverage of the war with Iran, his latest move in a campaign to stomp out what he sees as liberal bias in broadcasts."

Sunday, March 15, 2026

Who holds Congress accountable? A look at the invisible ethics system for lawmakers; PBS News, March 12, 2026

Lisa Desjardins and Kyle Midura, PBS News; Who holds Congress accountable? A look at the invisible ethics system for lawmakers

"Congress is charged with writing the laws that govern the rest of us, but who holds lawmakers accountable when they break the rules? We take a closer look at the number of sitting members of Congress facing active ethics investigations, and the largely invisible system designed to police them. Congressional correspondent Lisa Desjardins reports."

The Trump Administration Floats a New Way to Humiliate the Legal Profession; The New York Times, March 13, 2026

Deborah Pearlstein, The New York Times; The Trump Administration Floats a New Way to Humiliate the Legal Profession

"To fill those empty seats, the department has begun an increasingly desperate effort to recruit hires. (“Don’t be scared off by the transcript requirement,” a conservative law school reportedly told its students. “G.P.A. is not a strong factor.”) Even so, it seems too few lawyers are willing to take the chance. So the Trump administration last week offered up a different solution: a proposed rule that aims to shield Department of Justice lawyers from independent ethics investigations.

Such an arrangement would run afoul of a federal law known as the McDade Amendment, which says that government lawyers are subject to the ethics rules of the states in which they practice, “to the same extent and in the same manner” as every other lawyer licensed in the state. The proposed rule would be challenged in court immediately if it ever took effect. It shouldn’t get that far, however. It would do much more than potentially give department lawyers a free pass to lie on the president’s behalf. It would severely limit the courts’ ability to offer any kind of independent check on the executive branch.

Rules requiring lawyers to serve as honest officers of the court have been adopted by every state and the District of Columbia. They serve a host of purposes, starting with the basic right to fairness. These rules are also critical to the independence of the courts, which depend on access to reliable evidence and accurate representations by counsel.

Such rules serve an especially critical function in constitutional democracies, which distinguish themselves from authoritarian regimes in part by insisting that truth and falsehood exist separately from whatever the government may assert...

The move against state bars is of a piece with the administration’s broader strategy against universities, the media and law firms — any set of organizations capable of challenging the president’s power. And few things threaten it more than holding it to the truth."

Shelley’s ‘Frankenstein’ Gets an AI Reboot at Pasadena’s Hastings Branch Library; Pasadena Now, March 15, 2026

Pasadena Now; Shelley’s ‘Frankenstein’ Gets an AI Reboot at Pasadena’s Hastings Branch Library

A discussion today ties the 1818 novel's warnings about creator responsibility to contemporary debates over artificial intelligence, part of the city's One City, One Story program 

"Two centuries before algorithms began analyzing people’s dreams and predicting their crimes, Mary Shelley wrote a novel about a scientist who built something he could not control. That novel, “Frankenstein,” is the subject of a free discussion today at Hastings Branch Library, where presenter Rosemary Choate will connect its 207-year-old themes to the same questions about artificial intelligence that Pasadena’s citywide reading program is exploring all month.

The event, titled “Frankenstein: Myths and the Real Story?” is part of the Pasadena Public Library’s 24th annual One City, One Story program, which this year selected Laila Lalami’s “The Dream Hotel” — a dystopian novel about a woman detained because an algorithm, fed by data from her dreams, deemed her a future criminal. The library has organized a month of lectures, films and book discussions around the novel’s themes of surveillance, technology and freedom, and the Frankenstein session draws a direct line between Shelley’s 1818 tale and the anxieties at the center of Lalami’s story.

Choate, a comparative literature and humanities instructor and founder of the Pomona College Alumni Book Club, will lead the discussion at 3 p.m. She will examine themes including creator responsibility, the consequences of unchecked technological ambition and society’s rejection of the “creation” — questions the library’s event description calls “highly relevant to contemporary debates surrounding the development and governance of AI,” according to the Pasadena Public Library’s event listing.

Shelley published “Frankenstein; or, The Modern Prometheus” anonymously in 1818, when she was 20 years old. The novel tells the story of Victor Frankenstein, a young scientist who assembles a creature from dead body parts and recoils from what he has made. The creature, abandoned by its creator, becomes violent as it fails to find acceptance. The novel is widely considered one of the first works of science fiction.

The One City, One Story program, now in its 24th year, selects a single book each year for citywide reading and discussion. A 19-member committee of community volunteers, led by Senior Librarian Christine Reeder, chose “The Dream Hotel” for its exploration of surveillance, freedom and the reach of technology into private life. The program is sponsored by The Friends of the Pasadena Public Library and the Pasadena Literary Alliance.

The month of events culminates in a conversation with Lalami and Pasadena Public Library Director Tim McDonald on Saturday, March 21, at 2 p.m. at Pasadena Presbyterian Church, 585 E. Colorado Blvd. That event is also free and open to the public."

Music Copyright in the Gen AI Age: Where Are We Now?; Brooklyn Sports & Entertainment Law Blog, February 11, 2026

Sam Woods, Brooklyn Sports & Entertainment Law Blog; Music Copyright in the Gen AI Age: Where Are We Now?

"Imagine you are a musician who has dedicated years of your life creating an album or EP — tinkering with the production, revising lyrics, finding the perfect samples— and now, you have finally shared your art with the world and are thrilled with the project’s success. However, while scrolling on TikTok a few months later, you hear some familiar audio. Wait a minute, is that one of your songs? No… not quite, but why does it sound so similar? Turns out, the song was created using artificial intelligence (“AI”)."

AI is dressing up greed as progress on creative rights; Financial Times, March 14, 2026

Financial Times; AI is dressing up greed as progress on creative rights

"At this week’s London Book Fair, a lot of people were walking around with one particular title wedged under their arms. Called Don’t Steal This Book, its pages are empty apart from the names of thousands of authors, including Kazuo Ishiguro and Richard Osman. It’s a chilling protest against the rampant theft of creative work by tech firms, which could leave future artists unable to earn a living."

ByteDance’s Controversial AI Video Model Reportedly on Hold Globally Due to Copyright Disputes; Gizmodo, March 14, 2026

Gizmodo; ByteDance’s Controversial AI Video Model Reportedly on Hold Globally Due to Copyright Disputes

"According to two anonymous leakers who spoke to the Information, the global release of Seedance 2.0 is on hold amid legal action from movie studios and streaming services.

When it was initially released, Seedance 2.0 appeared to have few if any protections in place to prevent users from generating videos appearing to star celebrities, copyrighted characters, and celebrities as copyrighted characters."

Cascade of A.I. Fakes About War With Iran Causes Chaos Online; The New York Times, March 13, 2026

Stuart A. Thompson, The New York Times; Cascade of A.I. Fakes About War With Iran Causes Chaos Online

"A torrent of fake videos and images generated by artificial intelligence have overrun social networks during the first weeks of the war in Iran.

The videos — showing huge explosions that never happened, decimated city streets that were never attacked or troops protesting the war who do not exist — have added a chaotic and confusing layer to the conflict online.

The New York Times identified over 110 unique A.I.-generated images and videos from the past two weeks about the war in the Middle East. The fakes covered every aspect of the fighting: They falsely depicted screaming Israelis cowering as explosions ripped through Tel Aviv, Iranians mourning their dead and American military vessels bombarded with missiles and torpedoes.

Collectively, they were seen millions of times online through networks like X, TikTok and Facebook, and countless more times within private messaging apps popular in the region and around the world."

Social Media Isn’t Just Speech. It’s Also a Defective, Hazardous Product.; The New York Times, March 14, 2026

The New York Times; Social Media Isn’t Just Speech. It’s Also a Defective, Hazardous Product.

"For two decades now, social media companies have been virtually untouchable, profitably floating above accusations that they normalize propaganda, addict children and degrade our character. Legally and politically, platforms like Facebook, Instagram and YouTube have been protected by an idea that they and others have promoted: that they are not just innovative technologies but also speech platforms, so that imposing any limits on them would amount to both censorship and a drag on technological progress.

That protection is finally starting to weaken, thanks to a growing realization that social media is also a matter of public health. Seen this way, social media appears as something less newfangled and more familiar: a defective, hazardous product. The current trial of Meta’s Instagram and Google’s YouTube in Los Angeles Superior Court, in which a 20-year-old woman has accused the platforms of designing their products in ways that harmed her mental and physical health, is the clearest sign of this shift.

The case, in which closing arguments were made on Thursday, is the first of many lawsuits brought by thousands of young people, school districts and state attorneys general against companies like Meta, Google, Snap and TikTok. The plaintiffs in these cases do not accuse the companies merely of serving up bad content to young people; they argue that the very design of social media is intentionally engineered to create compulsions and habits of overuse, regardless of the content provided."

Saturday, March 14, 2026

Perspective: No copyright for AI-generated content; Northern Public Radio, March 13, 2026

David Gunkel, Northern Public Radio; Perspective: No copyright for AI-generated content

"What the courts actually decided is that neither the AI system nor the human who uses it counts as the author of the resulting work. Simply prompting ChatGPT or Claude to produce something isn’t considered the kind of creative activity that copyright law recognizes as authorship. And that creates an unexpected result. If neither the AI nor the human user is the author, then the work has no author at all. In effect, AI-generated images, music, and text become “orphan works”—creations that belong to no one. And that means that anyone can use them."

The Guardian view on changes to copyright laws: authors should be protected over big tech; The Guardian, March 13, 2026

The Guardian; The Guardian view on changes to copyright laws: authors should be protected over big tech

"In a scene that might have come from a dystopian novel, books were being stamped with “Human Authored” logos at this week’s London Book Fair. The Society of Authors described its labelling scheme as “an important sticking plaster to protect and promote human creativity in lieu of AI labelled content in the marketplace”.

Visitors to the fair were also being given copies of Don’t Steal This Book, an anthology of about 10,000 writers including Nobel laureate Kazuo Ishiguro, Malorie Blackman, Jeanette Winterson and Richard Osman, in which the pages are completely blank. The back cover states: “The UK government must not legalise book theft to benefit AI companies.” The message is clear: writers have had enough.

The fair comes the week before the government is due to deliver its progress report on AI and copyright, after proposals for a relaxation of existing laws caused outrage last year. Philippa Gregory, the novelist, described the plans for an “opt-out” policy, which puts the onus on writers to refuse permission for their work to be trawled, as akin to putting a sign on your front door asking burglars to pass by...

A House of Lords report published last week lays out two possible futures: one in which the UK “becomes a world-leading home for responsible, legalised artificial intelligence (AI) development” and another in which it continues “to drift towards tacit acceptance of large-scale, unlicensed use of creative content”. One scenario protects UK artists, the other benefits global tech companies. To avoid a world of empty content, the choice is clear."

Anthropic-Pentagon battle shows how big tech has reversed course on AI and war; The Guardian, March 13, 2026

The Guardian; Anthropic-Pentagon battle shows how big tech has reversed course on AI and war

"The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war – and what lines it will not cross. Amid Silicon Valley’s rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech’s answer is looking very different than it did even less than a decade ago."

Why I’m Suing Grammarly; The New York Times, March 13, 2026

The New York Times; Why I’m Suing Grammarly

"Like all writers, I live by my wits. My ability to earn a living rests on my ability to craft a phrase, to synthesize an idea, to make readers care about people and places they can only access through words on a page. Grammarly hadn’t checked with me before using my name. I only learned that an A.I. company was selling a deepfake of my mind from an article online.

And it wasn’t just me. Superhuman — the parent company of Grammarly — made fake editor versions of a range of people, including the novelist Stephen King, the late feminist author bell hooks, the former Microsoft chief privacy officer Julie Brill, the University of Virginia data science professor Mar Hicks and the journalist and podcaster Kara Swisher.

At this point in a story about A.I. exploitation, I would normally bemoan the need for new laws to tackle the novel harms of a new technology. But in this case, there is an old law that’s able to do the job.

In my home state of New York, the century-old right of publicity law prohibits a person’s name or image from being used for commercial purposes without her consent. At least 25 states have similar publicity statutes. And now, I’m using this law to fight back. I am the lead plaintiff in a class-action lawsuit against Superhuman in the U.S. District Court for the Southern District of New York, alleging that it violated New York and California publicity laws by not seeking consent before using our names in a paid service...

In this global crisis of consent, we must grab hold of the few anchors we have for enforcement. The right of publicity is one of them, but it needs to be strengthened into a federal law — not just a patchwork of state laws. In some states, it applies only to advertising; in others, to all types of commercial uses. In some, it only covers celebrities; in others, it applies to everyone...

Denmark has taken a novel approach: proposing an amendment to copyright laws that would allow people to copyright their bodies, facial features and voices to protect against A.I. deepfakes. I’d be happy to copyright myself — as copyright seems to be the only law that is regularly enforced on the internet these days...

What Grammarly made wasn’t a doppelgänger. As the writer Ingrid Burrington wrote on Bluesky, it was a sloppelgänger — A.I. slop masquerading as a person.

And it must be stopped."

What Was Grammarly Thinking?; The Atlantic, March 12, 2026

Kaitlyn Tiffany, The Atlantic; What Was Grammarly Thinking?

A short-lived AI tool promised to help users write like the greats—and a bunch of other random people, including me.

"But in the age of generative AI, there are many new kinds of copying. For instance, Wired reported last week on a tool offered by Grammarly, which briefly offered users the opportunity to put their writing through something called “Expert Review.” This produced AI-generated advice purportedly from the perspective of a bunch of famous authors, a bunch of less-famous working journalists (including myself, per The Verge’s reporting), and a bunch of academics (including some who had recently died).

I say “briefly” because the company deactivated the feature today. A lot of people got really mad about it because none of the experts had agreed for their work to be used in such a way, or to serve as uncompensated marketing for an app that people use to help them write more legible emails. “We hear the feedback and recognize we fell short on this,” the company’s CEO, Shishir Mehrotra, wrote on his LinkedIn page yesterday. Not long after, Wired reported that one of the journalists whose name had been used in the feature, Julia Angwin, was filing a class-action lawsuit against Grammarly’s owner, Superhuman Platform. In a statement forwarded by a spokesperson, Mehrotra repeated apologies made in his LinkedIn post and added, “We have reviewed the lawsuit, and we believe the legal claims are without merit and will strongly defend against them.”...

Now that I’ve looked more closely at this not-very-useful feature, and now that it’s shut down, the whole situation seems a little absurd. This was just a weird and inappropriate thing that a company tried to do to make money without putting in very much effort. The primary reason it became a news story at all was that it touched on widespread anxiety about whose work is worth what, whose skills will continue to be marketable in the age of AI, and whether any of us are really as complex, singular, and impossible-to-imitate as we might hope we are."

Friday, March 13, 2026

Former NFL players decry White House video mixing big hits, airstrikes; The Washington Post, March 12, 2026

The Washington Post; Former NFL players decry White House video mixing big hits, airstrikes

"The football montage, which was still online as of Thursday morning and by that time had collected over 10 million views on X, was met with criticism from members of the college and pro football community, not simply for the comparison of war and sport, but for the NFL’s and other rightsholders’ failure to object to the use of the images."

Leveling Up or Losing Rights? Copyright Challenges of AI-Generated Content in Gaming; The National Law Review, March 12, 2026

Nichole Hayden and Zahra Asadi, Nelson Mullins Idea Exchange - Insights, The National Law Review; Leveling Up or Losing Rights? Copyright Challenges of AI-Generated Content in Gaming

"Artificial intelligence is quickly becoming part of the regulated gaming ecosystem. From electronic slot machines and casino games to online sportsbooks and betting platforms, AI is now used to assist with everything from game themes and visual design to user interfaces and marketing content. While these tools promise efficiency and faster development cycles, they also raise an important legal question for gaming companies: when AI is involved in creating game content, who actually owns the result?"

OpenAI sued for practicing law without a license; ABA Journal, March 6, 2026

Amanda Robert, ABA Journal; OpenAI sued for practicing law without a license

"OpenAI has been accused of practicing law without a license in a lawsuit brought by Nippon Life Insurance Co. of America. 

According to the insurer’s complaint, which was filed on Wednesday in the Northern District of Illinois, OpenAI’s artificial intelligence platform ChatGPT pushed a woman seeking disability benefits to breach a settlement agreement and file dozens of motions that “serve no legitimate legal or procedural purpose.”"

Thursday, March 12, 2026

Autonomous AI Agents Have an Ethics Problem; Undark, March 5, 2026

Undark; Autonomous AI Agents Have an Ethics Problem

AI-powered digital assistants can do many complex tasks on their own. But who takes responsibility when they cause harm?

"As a bioethicist and specialist in neurointensive care, I deal directly with human moral agency and the essence of personhood when treating patients. As a researcher, I study the use of synthetic personas animating AI agents and their use as stand-ins of human counterparts. Here is the problem that I see: Granting AI personhood, even in limited capacity, risks formalizing the most dangerous escape hatch of the agentic era — what I will call responsibility laundering. This allows us to say, “It wasn’t me. The agent/bot/system did it.”

Personhood should not be about metaphysics or claims about an inner nature. It is a legal and ethical instrument that allocates rights and accountability. It is a social technology for assigning standing, duties, and limits on what can be done to an entity. If we grant personhood to systems that can act persuasively in public while remaining functionally unaccountable, we create a new class of actors whose harms are everyone’s problem but nobody’s fault.

There is a key concept here that we can use from my field, medicine. In clinical ethics, some decisions are justified yet still leave a “moral residue,” a kind of emotional echo or sense of responsibility that persists after the action because no options fully satisfy competing obligations. This residue accumulates over time, causing a “crescendo effect” that occurs even when conscientious clinicians are doing their best inside imperfect systems. That remainder matters because it reveals something basic about moral life, namely that ethics is not only about choosing; it is about owning what remains afterwards."

An Artist Renounced His Family. They Sued to Acquire His Life’s Work.; The New York Times, March 11, 2026

Arthur Lubow, The New York Times; An Artist Renounced His Family. They Sued to Acquire His Life’s Work.

A settlement is reached in the case of Mike Disfarmer, who renounced his family. Decades later they sued to take back his life’s work. When heirs battle the people who built their legacies, the art may be at stake.

"Art scholars and experts on intellectual property law say the litigation over the Disfarmer archive poses consequential ethical and legal questions, among them: Who should manage the estate of an artist who dies without a will? Heirs who hardly knew him — or outsiders, including museums, who built and conserved the estates that are now worth fighting over?

The Disfarmer litigation raises some of the same issues — and indeed, involves some of the same players — as the lawsuits initiated by families of two other reclusive American artists who died without wills: Vivian Maier and Henry Darger, who both lived in Chicago. All three were unrecognized during their lifetimes and out of touch with their relatives. When their estates belatedly became valuable, distant cousins stepped up to demand their rights. The law would dictate the outcome. But some question whether the law always serves an artist’s best interests."

Waterbury's Post University awarded $75.3M in copyright infringement lawsuit; CT Insider, March 11, 2026

CT Insider; Waterbury's Post University awarded $75.3M in copyright infringement lawsuit

"A federal jury composed of Connecticut residents has ordered the education software company Learneo to pay Post University more than $75.3 million in damages for distributing school-owned documents on its Course Hero platform. 

The Hartford jury found the San Francisco-based company violated U.S. copyright law by hosting the documents without permission and altered the files to conceal the infringement, according to court records."

Wednesday, March 11, 2026

Introducing The Anthropic Institute; Anthropic, March 11, 2026

Anthropic; Introducing The Anthropic Institute

"We’re launching The Anthropic Institute, a new effort to confront the most significant challenges that powerful AI will pose to our societies. The Anthropic Institute will draw on research from across Anthropic to provide information that other researchers and the public can use during our transition to a world containing much more powerful AI systems.

In the five years since Anthropic began, AI progress has moved incredibly quickly. It took us two years to release our first commercial model, and just three more to develop models that can discover severe cybersecurity vulnerabilities, take on a wide range of real work, and even begin to accelerate the pace of AI development itself.

We predict that far more dramatic progress will follow in the next two years. One of our company’s core convictions is that AI development is accelerating: that the improvements we make are compounding over time. Because of this, extremely powerful AI, like the kind our CEO Dario Amodei describes in Machines of Loving Grace, is coming far sooner than many think.

If this is right, society is shortly going to need to confront many massive challenges. How will powerful AI systems reshape our jobs and economies? What kinds of opportunities for greater societal resilience will they give us? What kinds of threats will they magnify or introduce? What are the expressed “values” of AI systems and how will society help companies determine what the appropriate values are? And, if the recursive self-improvement of AI systems does begin to occur, who in the world should be made aware, and how should these systems be governed?

The Anthropic Institute’s goal is to tell the world what we’re learning about these challenges as we build frontier AI systems, and to partner with external audiences to help address the risks we must confront. Whether our societies are able to do so will determine whether or not transformative AI delivers the radical upsides that we believe are possible in science, economic development, and human agency.

The Institute is led by our co-founder Jack Clark, who will assume a new role as Anthropic’s Head of Public Benefit. It has an interdisciplinary staff of machine learning engineers, economists, and social scientists, bringing together and expanding three of Anthropic’s research teams: the Frontier Red Team, which stress-tests AI systems to understand the outermost limits of their current capabilities; Societal Impacts, which studies how AI is being used in the real world; and Economic Research, which tracks its impact on jobs and the larger economy. The Institute will also incubate new teams, and is currently working on efforts around forecasting AI progress and better understanding how powerful AI will interact with the legal system.

The Institute has a unique vantage point: it has access to information that only the builders of frontier AI systems possess. It will use this to its full advantage, reporting candidly about what we’re learning about the shape of the technology we’re making. At the same time, the Institute is a two-way street. It will engage with workers and industries facing displacement, and with the people and communities who feel the future bearing down on them but are unsure how to respond. What we learn will inform what the Institute studies, and how our company as a whole chooses to act.

The Anthropic Institute has made several founding hires:

  • Matt Botvinick, a Resident Fellow at Yale Law School and previously Senior Director of Research at Google DeepMind and Professor in Neural Computation at Princeton, is joining the Institute to lead its work on AI and the rule of law.
  • Anton Korinek is joining the Economic Research team, on leave from his role as Professor of Economics at the University of Virginia, to lead an effort studying how transformative AI could reshape the very nature of economic activity.
  • Zoë Hitzig, who previously studied AI’s social and economic impacts at OpenAI, is joining to connect our economics work to model training and development."

Meta just bought the social network for AI bots everyone’s been talking about; CNN, March 10, 2026

Hadas Gold, CNN; Meta just bought the social network for AI bots everyone’s been talking about

"Meta, the company behind some of the world’s most popular social media platforms, just scooped up a new site – for bots.

Meta has acquired Moltbook, the social media network where AI agents interact with one another autonomously, the company said in a statement on Tuesday.

Meta is competing with rivals like OpenAI for both talent and users’ attention. And as AI expands into more aspects of Americans’ lives, tech companies are trying to figure out the best way to position themselves to win what’s becoming a sort of technological arms race.

Moltbook became the talk of Silicon Valley last month, racking up millions of registered bots within days of its launch. Some in the industry saw it as a major leap because it demonstrated what can happen when AI agents socialize with one another like humans. Others said the site is full of sham agents, AI slop and security risks and should be viewed skeptically."

D.C. Bar Begins Disciplinary Proceedings Against Ed Martin; The New York Times, March 10, 2026

The New York Times; D.C. Bar Begins Disciplinary Proceedings Against Ed Martin

A new legal filing accused Mr. Martin, a senior Justice Department official, of an unethical pressure campaign against Georgetown University.

"The disciplinary body for lawyers in the District of Columbia has filed ethics charges against Ed Martin, a senior Justice Department official in the Trump administration, accusing him of misconduct in seeking to punish Georgetown University’s law school, according to a filing.

Mr. Martin, who has spearheaded efforts by President Trump to use the Justice Department to punish the president’s perceived enemies, faces two counts of misconduct. The filing, submitted on Friday before the D.C. Court of Appeals Board on Professional Responsibility, is comparable to a civil lawsuit complaint in court and was signed by Hamilton P. Fox III, the disciplinary counsel for the D.C. bar.

Mr. Martin, who was forced to step down as the U.S. attorney in Washington because he did not have the Senate votes for confirmation, instead became the Justice Department’s pardon attorney. In that role, he has had far more access and influence in the White House than many of his predecessors.

The complaint is a significant escalation in the efforts to use state and local bars to punish lawyers in the Trump administration for purported violations of ethics rules in pursuit of the president’s aims. Last week, Attorney General Pam Bondi proposed a new rule to try to stall or delay bar associations from conducting such investigations into lawyers at the department."

Democrats ask what happened to millions earmarked for Trump’s library; The Washington Post, March 11, 2026

The Washington Post; Democrats ask what happened to millions earmarked for Trump’s library

ABC, Meta, Paramount and X reportedly agreed to pay at least $63 million in settlements with the president. The original fund was dissolved last year.

"Congressional Democrats are opening a probe into millions of dollars private companies pledged to President Donald Trump’s planned presidential library, asking what happened to the money after the original fund was dissolved last year.

Sens. Elizabeth Warren (Massachusetts) and Richard Blumenthal (Connecticut) and Rep. Melanie Stansbury (New Mexico) wrote Monday to the leaders of ABC, Meta, Paramount and X, requesting information about the terms of their agreements and the status of the funds they pledged to hand over to the president’s representatives. The letters were shared with The Washington Post."