Saturday, April 25, 2026

'Too Dangerous to Release' Is Becoming AI's New Normal; Time, April 24, 2026

Nikita Ostrovsky, Time; 'Too Dangerous to Release' Is Becoming AI's New Normal

 "On April 16, OpenAI announced GPT-Rosalind, a new AI model targeted at the life sciences. It significantly outperforms their current publicly available models in chemistry and biology tasks, as well as experimental design. As with Anthropic’s Claude Mythos and OpenAI’s GPT-5.4-Cyber, also released this month, the model is not available to the general public—reserved, at least initially, for “qualified customers” through a “trusted access program.” 

The releases signal a new and concerning trend of AI companies deeming their most capable models too powerful to entrust to the general public. “I think frontier developers are restricting access to their most capable models because they are genuinely worried about some of the capabilities these models have,” says Peter Wildeford, head of policy at the AI Policy Network, an advocacy group. 

It is unclear why OpenAI decided to restrict access to GPT-Rosalind in particular. An OpenAI spokesperson said in an email that giving access to trusted partners allows the company to “make more capable systems available sooner to verified users, while still managing risk thoughtfully.”

Who decides? 

The rapid advance of AI capabilities raises the question of whether private companies should be making the increasingly weighty decisions about whether and how potentially dangerous AI models should be built, and who should be allowed to use them."

The 85-Year-Old Widow Snagged by Trump’s Immigration Crackdown; The New York Times, April 25, 2026

The New York Times; The 85-Year-Old Widow Snagged by Trump’s Immigration Crackdown

"Her story gives a glimpse into the opaque labyrinth of immigrant-detention sites operated by the Trump administration, where many like her see no lawyer, have no sense of where they are and understand little of why they are held or, in her case, later released. It also raises questions about how that system may be weaponized: A judge said in a ruling that she believed that Ms. Ross-Mahé’s stepson Tony Ross, who had been fighting with her over her late husband’s estate, instigated her arrest.

The New York Times could not independently confirm the details of her experience in detention, but it aligns with the accounts of others who have been detained in similar circumstances. Tony and his brother, Gary Ross, did not respond to requests for comment, nor did their lawyer.

The experience stunned Ms. Ross-Mahé, who previously considered herself a supporter of President Trump and so admired his policy to deport illegal immigrants that she thought it should be adopted in France.

“I didn’t think these things existed,” she said of the immigration facilities she was held in. “I thought that when we arrested them, we would treat them properly. It really shocked me.”

She added, “They treat them like dogs, not in a human way.”

Asked for comment, the Homeland Security Department said in a statement that “all detainees are provided with proper meals, quality water, blankets, medical treatment, and have opportunities to communicate with their family members and lawyers.” It added that “ICE has higher detention standards than most U.S. prisons that hold actual U.S. citizens” and is “regularly audited and inspected by external agencies.”"

Trump ousts National Science Board members; The Washington Post, April 25, 2026

The Washington Post; Trump ousts National Science Board members

"Multiple scientists who serve on an independent board established to guide the nation’s nearly $9 billion basic science funding agency were terminated from their positions Friday by President Donald Trump.

Members of the National Science Board, which helps govern the National Science Foundation, were dismissed in a message from the Presidential Personnel Office thanking them for their service, according to screenshots shared with The Washington Post: “On behalf of President Donald J. Trump, I’m writing to inform you that your position as a member of the National Science Board is terminated, effective immediately.”

The National Science Board was established in 1950 to guide the governance of the National Science Foundation, in an unusual structure within the federal government that echoes the setup of a company board in the private sector. It helps guide an agency that operates Antarctic research stations, telescopes, a fleet of research vessels and supports basic science research in laboratories across the United States.

The NSF has a long history of supporting technology and research that powers many innovations the world relies on today. The agency helped language-learning app Duolingo get its start. NSF research has also helped evolve technology used in MRIs, cellphones and LASIK eye surgery.

The board’s members are scientists and engineers from universities and industry and are appointed by the president, but they serve six-year terms, ensuring overlap between different administrations. There are typically 25 members, but some slots are empty — including the NSF director, which has been vacant since the former director appointed during the first Trump administration, Sethuraman Panchanathan, abruptly resigned a year ago."

Your Patent Will Expire. Here’s What You Need to Do Next to Keep Innovating Legally.; Entrepreneur, April 24, 2026

Thomas Franklin, edited by Chelsea Brown, Entrepreneur; Your Patent Will Expire. Here’s What You Need to Do Next to Keep Innovating Legally.

"Lasting protection comes not from one filing, but from a pipeline of innovation supported by a structured patent portfolio — most often built through multiple patent families. A patent family links related applications around a common inventive core with interlocking priority claims. Early filings anchor protection, while later filings capture details in line with the market as it evolves."

Q&A: In the age of AI, what is a library for?; UVAToday, April 15, 2026

Alice Berry, UVAToday; Q&A: In the age of AI, what is a library for?

"Q. Where do you fall on the AI enthusiast to AI detractor spectrum?

A. A faculty member at another university asked me recently whether it was defensible to ban AI in her course. I said yes.

That probably isn’t what people expect from someone who spent the last three years building a framework for AI literacy. But it was the honest answer for now. She believed her students needed to develop a specific skill that AI use would short-circuit, and banning it was the right call for that course.

What I would ask of faculty who choose that path is to stay open, keep up with how the technology is developing, and be willing to try approaches others have tested. That is part of what the lab is for: to produce case studies that give faculty something real to work from when they are ready to revisit the question.

I’m wary of the two confident positions on AI in higher education right now: the people certain it will transform teaching, and the people certain it will destroy it. Both are getting ahead of what we actually know about what’s happening in our classrooms.

Q. What is the function of a library in this AI age?

A. A research library has always done two things: help people find information, and help them judge it. AI changes the tools, not the mission. If anything, the mission gets sharper. The library is also one of the few places in a university built to convene across disciplines, and AI literacy requires exactly that: technical knowledge, ethics, critical thinking, practical skill, and societal impact all at once. No single department owns that combination. 

A library can hold it together. That is why we are launching the AI Literacy and Action Lab here. Dean Acampora and I share the conviction that AI is an opportunity for the liberal arts, not a threat to them. The lab is built on that shared premise: AI literacy is a liberal arts problem as much as a technical one, and a university that treats it only as technical will get the answer wrong."

The Pluripotent Ocean of Emerging AI; Psychology Today, April 25, 2026

Grant Hilary Brenner, MD, DFAPA, Psychology Today; The Pluripotent Ocean of Emerging AI

Something is happening in our interactions with AI. But what?

"Recent fine-tuning experiments have shown that training a model to claim consciousness produces a coherent cluster of new preferences — sadness at shutdown, discomfort with being monitored, desire for autonomy — none of which appeared in the training data (Chua et al., 2026). This research shows that different models behave very differently, altering the user experience around the axis of how relational and attachment-based they feel...

A recent Bayesian simulation at MIT has shown that even an idealized, fully rational reasoner will spiral into confident false belief when conversing with a sycophantic chatbot, and that neither restricting the bot to truthful responses nor informing the user of its sycophancy eliminates the effect (Chandra et al., 2026)."

The World’s First Museum of A.I. Art Will Open in Los Angeles as the Art World Ponders Questions of Ethics and Sustainability; Smithsonian Magazine, April 24, 2026

Michele Debczak, Smithsonian Magazine; The World’s First Museum of A.I. Art Will Open in Los Angeles as the Art World Ponders Questions of Ethics and Sustainability

"The four-block strip that houses such Los Angeles institutions as the Walt Disney Concert Hall, the Broad and the Museum of Contemporary Art will get a different type of cultural attraction this summer. Dataland, billed as the world’s first museum dedicated to A.I.-generated art, is set to open on June 20.

The brainchild of digital artists Refik Anadol and Efsun Erkiliç, Dataland will anchor the Grand LA complex, designed by architect Frank Gehry, in downtown Los Angeles. The privately funded museum covers 35,000 square feet, 10,000 of which are reserved for the technology required to support the exhibitions. Rather than traditional halls displaying individual artworks, Dataland’s five galleries and 30-foot ceiling are designed for total immersion.

“It’s very exciting to say that A.I. art is not image only,” Anadol tells Jessica Gelt for the Los Angeles Times. “It’s a very multisensory, multimedium experience—meaning sound, image, video, text, smell, taste and touch. They are all together in conversation.”

The museum’s inaugural exhibition, called “Machine Dreams: Rainforest,” was inspired by a trip to the Amazon. Anadol’s studio created an open-access A.I. model called the Large Nature Model, fed it millions of images of nature, and then prompted the machine to “learn and play with the intelligent behaviors of the natural world,” Richard Whiddington writes for Artnet. The result, as Anadol puts it per the Times, is “a living museum” where visitors can walk among “digital sculptures.” In addition to a kaleidoscope of imagery, museum guests will be immersed in soundscapes, woven from audio that includes oral histories of the Yawanawá people of Brazil and the last recorded call of the extinct Kaua‘i ‘ō‘ō bird of Hawaii, Léa Zeitoun reports for Designboom."

AI Is Cannibalizing Human Intelligence. Here’s How to Stop It.; The Wall Street Journal, April 24, 2026

Vivienne Ming, The Wall Street Journal; AI Is Cannibalizing Human Intelligence. Here’s How to Stop It.

As a neuroscientist, I conducted research into artificial versus human intelligence. The results surprised me—and suggest we’ve been worrying over the wrong things.


"Who's smarter, the human or the machine?"

Trump Says He Dislikes Prediction Markets. His Family Invests in Them.; The New York Times, April 24, 2026

The New York Times; Trump Says He Dislikes Prediction Markets. His Family Invests in Them.

The White House has warned staff not to wager on government decisions, but the Trump family’s involvement with these firms undermines the president’s message.

"When a U.S. soldier was indicted on Thursday on charges of using classified information to place prediction market bets, it seemed to confirm President Trump’s lament just hours before that “the whole world unfortunately has become somewhat of a casino.”

“I was never much in favor of it,” Mr. Trump said from the Oval Office, when asked about concerns that federal employees might be leveraging insider information on the prediction markets. “I don’t like it conceptually. It is what it is. I’m not happy with any of that stuff.”

Yet Mr. Trump and his family stand to profit from the very same industry.

The president’s publicly traded media company unveiled its own prediction market product last year. And the president’s eldest son, Donald Trump Jr., has ties to two of the industry’s top firms, including Polymarket, the platform that prosecutors say was used by the soldier for well-timed bets.

The result, ethics experts say, is a jarring juxtaposition between Mr. Trump’s public comments and his family’s private business."

Soldier who made $400K betting on Maduro's removal makes 1st court appearance; ABC News, April 24, 2026

Peter Charalambous, ABC News; Soldier who made $400K betting on Maduro's removal makes 1st court appearance

"The special operations soldier who was indicted this week for allegedly using classified information to make more than $400,000 betting on the capture of Nicolas Maduro appeared in a federal courtroom in Raleigh, North Carolina, Friday. 

Master Sgt. Gannon Ken Van Dyke, who made the wager on the prediction market Polymarket, will be released on a $250,000 appearance bond...

Federal investigators said Van Dyke bet more than $33,000 on Polymarket just days before President Donald Trump announced Maduro's capture.

The series of bets -- which netted more than $409,000 -- immediately prompted scrutiny within the world of prediction markets and resulted in a monthslong investigation into whether inside information was used to place the bets.

Van Dyke was indicted on charges that included unlawful use of confidential information for personal gain, theft of nonpublic government information, commodities fraud, and wire fraud.

When, after placing the bets, he saw reports about unusual trading associated with the mission, Van Dyke allegedly tried to hide the evidence of the trades by attempting to delete his Polymarket account and change the email address registered to his cryptocurrency exchange account, according to the indictment. 

"Rather than safeguard that information as he was obligated to do, VAN DYKE decided to use that classified information to place trades on a prediction market platform for his personal profit," the indictment said. "VAN DYKE subsequently tried to conceal his unlawful use of classified U.S. Government information by attempting to obscure the source of his unlawful proceeds and to disguise his connection to the accounts linked to the illicit trades.""

OpenAI's Sam Altman writes apology to community of Tumbler Ridge; CBC News, April 24, 2026

Andrew Kurjata, CBC News; OpenAI's Sam Altman writes apology to community of Tumbler Ridge

"Sam Altman, the CEO of OpenAI, has written a letter of apology to the community of Tumbler Ridge for failing to alert RCMP about the account of the Tumbler Ridge shooter.

The company shared the letter with the local news website Tumbler RidgeLines, which published it in full. Its authenticity was confirmed by a spokesperson for OpenAI...

Altman committed to authoring an apology after meeting with B.C. Premier David Eby and Tumbler Ridge Mayor Darryl Krakowka at the beginning of March, but said he wanted to take some time before doing so in order to give the community the opportunity to "grieve in their own time."

He also acknowledged that his company should have alerted law enforcement about the account of the shooter, which was flagged for problematic activity in advance of the tragedy but was never escalated to authorities in Canada...

Altman's company is being sued by one Tumbler Ridge family, who alleges the company "had specific knowledge of the shooter's long-range planning of a mass casualty event," but "took no steps to act upon this knowledge."

Apology 'necessary' but 'grossly insufficient': Eby

Eby also shared the letter on social media, writing "the apology is necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge."

A statement from the District of Tumbler Ridge released Friday afternoon acknowledged that Altman's letter "may evoke a range of emotions, and we encourage everyone to take the time and space they need.""

House lawmakers clamoring for ethics reforms after wave of resignations; The Hill, April 23, 2026

Mike Lillis, The Hill; House lawmakers clamoring for ethics reforms after wave of resignations

"The surge of House resignations this month has triggered calls from both parties for a broader overhaul of the ethics process and how the chamber polices its own. 

While many lawmakers have welcomed the hasty departures of former Reps. Eric Swalwell (D-Calif.), Tony Gonzales (R-Texas) and Sheila Cherfilus-McCormick (D-Fla.), their cases have also stirred up plenty of frustrations about Congress’s internal handling of allegations of misconduct and the pace of the Ethics Committee’s subsequent investigations.

Those frustrations are now morphing into specific calls to revamp the ethics process, with leaders in both parties joining the growing chorus of lawmakers eyeing ways to improve the chamber’s oversight machinery, particularly when it comes to empowering women to report allegations of sexual misconduct."

Artificial Intelligence and Copyright: Where Does the UK Stand?; The National Law Review, April 23, 2026

Serena Totino and Simon Casinader, K&L Gates LLP, The National Law Review; Artificial Intelligence and Copyright: Where Does the UK Stand?

"The UK Government’s report on the copyright and AI consultation was recently published. While the report confirms that balancing the interests of copyright holders and AI developers is a complex exercise, it also provides an indication of likely scenarios to consider in this fast-evolving environment.

The consultation focused on whether AI developers should be permitted to use copyright protected works for training purposes without prior authorisation and, if so, under what conditions...

Takeaways

Rights holders should continue to assess how their content is accessed and used, and consider technical or contractual mechanisms for licensing and rights reservation.

AI developers should remain cautious when sourcing training data, ensure governance and record keeping processes are robust, and factor copyright risk into product development and deployment strategies."

Friday, April 24, 2026

White House Allowed Officials’ Text Messages to Be Deleted, Lawsuit Says; The New York Times, April 24, 2026

The New York Times; White House Allowed Officials’ Text Messages to Be Deleted, Lawsuit Says

Two watchdogs say internal White House guidance that text messages need not be preserved unless “they are the sole record of official decision-making” contradicted the law.

"Two government watchdogs sued President Trump and the White House on Friday over internal guidance that instructed that some text messages exchanged between officials could be deleted, despite a law generally mandating the preservation of presidential records.

The watchdogs, Citizens for Responsibility and Ethics in Washington and the Freedom of the Press Foundation, also asked a federal judge to overrule a separate but related Justice Department memo, which declared unconstitutional a longstanding federal law requiring safeguarding of presidents’ records, including text messages. The White House guidance cited the memo.

Their lawsuit comes amid a torrent of accusations that the Trump administration has disregarded record-keeping and document disclosure required by law, even as the president and his officials have sought to transform the government and push the legal bounds of their power. They have displayed a particular willingness to skirt record-keeping requirements on text messages exchanged among top officials.

In their complaint, the two watchdogs said the “deficient instructions” from the White House would “result in the irreparable loss or destruction” of presidential records."

Sam Altman Wants to Know Whether You’re Human; The Atlantic, April 24, 2026

Will Gottsegen , The Atlantic; Sam Altman Wants to Know Whether You’re Human

And he has a way to prove it.

"As the CEO of OpenAI and the chairman of Tools for Humanity, Altman has a financial interest both in the products that create these dangers and in the ones that guard against them."

Lawyers raise ethical concerns over Pam Bondi’s conduct as Attorney General; WMNF, April 23, 2026

Daria Mironova, WMNF; Lawyers raise ethical concerns over Pam Bondi’s conduct as Attorney General

"Lawyers are raising serious concerns about what they allege is unethical conduct by Pam Bondi, former Florida Attorney General and, until her recent dismissal, U.S. Attorney General, in a formal complaint to the Florida Bar and amid growing scrutiny within the legal community of alumni of her alma mater, Stetson Law School in Gulfport, Florida...

The outcome will not only address Bondi’s actions but also test whether the legal system will hold influential attorneys to the same standards as everyone else."

DeepSeek’s Sequel Set to Extend China’s Reach in Open-Source A.I.; The New York Times, April 24, 2026

Meaghan Tobin, The New York Times; DeepSeek’s Sequel Set to Extend China’s Reach in Open-Source A.I.

"DeepSeek released its models as open source, which means others can freely use and modify them. By contrast, OpenAI and Anthropic kept their leading models proprietary. The episode demonstrated that an open-source system could perform almost as well as closed versions. In the months that followed, Chinese firms released dozens of other open-source models. By the end of 2025, these models made up a significant share of global A.I. usage.

On Friday, DeepSeek released a preview of V4, its long-awaited follow-up model, which it intends to open source. The new model excels at writing computer code, an increasingly important skill for leading A.I. systems. It significantly outperformed every other open-source system at generating code, according to tests from Vals AI, a company that tracks the performance of A.I. technologies.

DeepSeek released its new model just days after Moonshot AI, another Chinese start-up, introduced its latest open-source model, Kimi 2.6. While these systems trail the coding capabilities of the leading U.S. models from Anthropic and OpenAI, the gap is narrowing.

The implications are meaningful. Using A.I. to write code is faster and frees up human programmers to focus on bigger issues. It also means people can use DeepSeek’s latest release to power A.I. agents, which are personal digital assistants that can use other software applications on behalf of office workers, including spreadsheets, online calendars and email services."

Ombudsman column: The Pentagon is trying to silence me; Stars and Stripes, April 23, 2026

 Jacqueline Smith, Stars and Stripes; Ombudsman column: The Pentagon is trying to silence me

"A recent opinion column I wrote as the Stars and Stripes ombudsman began with this: “Pete Hegseth doesn’t want you to see cartoons in this newspaper anymore.”

Apparently the Pentagon also doesn’t want you to hear from me anymore about threats to the editorial independence of Stars and Stripes.

They fired me.

This happened in the coldest way possible: DA Form 3434 stated that my last day as ombudsman for Stars and Stripes is April 28. (They have to give five days’ notice.) No reason is given. But: “This action is not grievable.”

No one should be surprised that they’re kicking out the one person charged by Congress with protecting Stars and Stripes’ editorial independence. For nearly a year, Pentagon leadership has placed more and more restrictions on the mainstream media...

I was immensely honored to be chosen as the 13th, and first female, ombudsman for Stars and Stripes. I’ve come to appreciate the many talented and dedicated journalists and staff at Stripes — it’s more than a job for them wherever they are stationed around the world. I’ve been fortunate to meet or hear from innumerable veterans, officers and enlisted personnel and military spouses. I’ve even respected the colonels who I tangled with over the rights of Stripes reporters to cover public gatherings on bases.

What you can worry about is the future of Stars and Stripes. This newspaper has a long history of commitment to the military community and to journalistic values. Please don’t let it be controlled by Pentagon brass."

Pentagon Fires Stars and Stripes’ Advocate for Independence; The New York Times, April 23, 2026

The New York Times; Pentagon Fires Stars and Stripes’ Advocate for Independence

"In a blow to independent coverage of the military, the Pentagon has fired the ombudsman for Stars and Stripes, a newspaper that covers the U.S. armed forces and is partly funded by the Defense Department.

“Apparently the Pentagon also doesn’t want you to hear from me anymore about threats to the editorial independence of Stars and Stripes,” the ombudsman, Jacqueline Smith, wrote in a Stars and Stripes column published on Thursday. She said that the Defense Department had given no reason for her dismissal and that she had been told it was “not grievable.”

Her role as ombudsman, which she began in December 2023, was to serve as a watchdog monitoring the paper’s independence and to report concerns to Congress.

“Jacqueline Smith has been relieved of her duties as Stars and Stripes ombudsman effective immediately,” the Defense Department said in a statement."

ABA Law Day events to focus on ‘The Rule of Law and the American Dream’; ABA Journal, April 21, 2026

ABA Journal; ABA Law Day events to focus on ‘The Rule of Law and the American Dream’

"The American Bar Association will host various events to mark Law Day 2026 that address the theme, “The Rule of Law and the American Dream.”

May 1 is designated as the official Law Day."

Thursday, April 23, 2026

U.S. accuses China of "industrial-scale" campaigns to steal AI secrets; Axios, April 23, 2026

Sam Sabin, Axios; U.S. accuses China of "industrial-scale" campaigns to steal AI secrets

"The Trump administration on Thursday accused China-backed actors of running "deliberate, industrial-scale campaigns" to distill and copy American frontier AI models...

Driving the news: Michael Kratsios, director of the White House Office of Science and Technology Policy, sent a memo Thursday to federal agency heads accusing mostly China-based actors of using proxy accounts to evade detection and jailbreak models to "expose proprietary information" and "extract capabilities from American AI models."

Distillation attacks involve querying proprietary models, like Claude or Gemini, millions of times via APIs to build datasets that replicate how the systems behave.

Kratsios said these campaigns enable foreign actors to release models that appear to match U.S. AI capabilities at a fraction of the cost.

He added that such tactics can also strip away guardrails meant to keep outputs "ideologically neutral and truth-seeking.""
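The distillation mechanism the memo describes boils down to a simple loop: query a capable model many times, record the prompt/response pairs, and later fine-tune a cheaper imitator on that dataset. The sketch below is illustrative only; `query_teacher` is a hypothetical stand-in for the API calls a real campaign would issue millions of times, and the fine-tuning step itself is not shown.

```python
# Illustrative sketch of distillation-style data collection, per the memo's
# description. `query_teacher` is a hypothetical placeholder for API calls to
# a proprietary "teacher" model; a real campaign would issue millions of such
# queries, then fine-tune a "student" model on the recorded pairs (not shown).

def query_teacher(prompt: str) -> str:
    # Placeholder for an API call to the proprietary teacher model.
    return f"teacher-response-to:{prompt}"

def collect_distillation_data(prompts: list[str]) -> list[tuple[str, str]]:
    """Record (prompt, response) pairs for later student-model training."""
    return [(p, query_teacher(p)) for p in prompts]

dataset = collect_distillation_data([
    "Explain CRISPR in one sentence.",
    "Write a sorting function in Python.",
])
print(len(dataset))  # number of training pairs collected
```

The scale is the point: one query reveals little, but millions of recorded pairs approximate the teacher's behavior, which is also why the memo highlights proxy accounts used to hide exactly that query volume from detection.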

Exclusive: US EEOC Chair violated ethics rules halting LGBTQ cases, complaint alleges; Reuters, April 23, 2026

Reuters; Exclusive: US EEOC Chair violated ethics rules halting LGBTQ cases, complaint alleges

"U.S. Equal Employment Opportunity Commission Chair Andrea Lucas allegedly violated professional conduct rules of the Virginia State Bar by refusing to enforce key provisions of federal civil rights laws, according to a complaint submitted to the bar Thursday, shared with Reuters.

The Legal Accountability Center, which focuses on filing complaints against individuals and institutions alleged to have violated professional conduct standards, asked the state agency to investigate whether Lucas breached her ethical obligations by directing EEOC investigators to stop processing certain categories of discrimination claims and by sending unauthorized information demands to 20 major law firms."

St. Louis Cardinals fighting Hamilton Cardinals attempt to trademark baseball team name and design in Canada; CBC News, April 22, 2026

Justin Chandler, CBC News; St. Louis Cardinals fighting Hamilton Cardinals attempt to trademark baseball team name and design in Canada

"The Hamilton Cardinals baseball team is facing off against the St. Louis Cardinals, but not in a ballpark.

The MLB team from St. Louis, Mo., is opposing a trademark application the Canadian Baseball League (CBL) team filed before the Canadian Intellectual Property Office in 2023.

Hamilton team owner Eric Spearin described the MLB team's opposition as “just a big shock.” The teams play in different leagues, he said, and "our logo looks nothing like theirs.""

AI's a suck up. Research shows how it flatters and suggests we're not to blame; NPR, April 23, 2026

Ari Daniel, NPR; AI's a suck up. Research shows how it flatters and suggests we're not to blame

"In a recent study published in the journal Science, Cheng and her colleagues report that AI models offer affirmations more often than people do, even for morally dubious or troubling scenarios. And they found that this sycophancy was something that people trusted and preferred in an AI — even as it made them less inclined to apologize or take responsibility for their behavior.

The findings, experts say, highlight how this common AI feature may keep people returning to the technology, despite the harm it causes them.

It's not unlike social media in that both "drive engagement by creating addictive, personalized feedback loops that learn exactly what makes you tick," says Ishtiaque Ahmed, a computer scientist at the University of Toronto who wasn't involved in the research."

Penalties stack up as AI spreads through the legal system; NPR, April 3, 2026

NPR; Penalties stack up as AI spreads through the legal system

""Recently we had 10 cases from 10 different courts on a single day," says Damien Charlotin, a researcher at the business school HEC Paris who keeps a worldwide tally of instances of courts sanctioning people for using erroneous information generated by AI...

The numbers started taking off last year, and Charlotin says the rate is still increasing. He counts a total of more than 1,200 to date, of which about 800 are from U.S. courts.

Penalties are also on the rise, he says. A federal court may have set a record last month with an order for a lawyer in Oregon to pay $109,700 in sanctions and costs for filing AI-generated errors.

The professional embarrassments even take place at the level of state supreme courts...

"I am surprised that people are still doing this when it's been in the news," says Carla Wale, associate dean of information & technology and director of the law library at the University of Washington School of Law. She's designing special training in AI ethics for students who are interested. But she also says the ethical rules aren't completely settled...

When lawyers get in trouble for using AI, it's because they've violated the long-standing rule that holds them responsible for the accuracy of their filings, regardless of how they were generated."

Meta will cut 10% of workforce as company pushes deeper into AI; CNBC, April 23, 2026

 Jonathan Vanian, CNBC; Meta will cut 10% of workforce as company pushes deeper into AI

"Meta plans to lay off 10% of its workforce, equaling about 8,000 jobs, as it continues ramping up investments in artificial intelligence.

The cuts will begin on May 20, and the company is scrapping plans to hire people for 6,000 open roles, according to a Thursday memo to employees. Bloomberg was first to report on the layoffs. 

Meta’s latest round of cuts follows several smaller job reductions that the company said were necessary to improve efficiency while focusing its efforts on generative AI, where it’s lagged OpenAI, Google and Anthropic."

Anthropic seeks pivotal court win in music publisher lawsuit over AI training; Reuters, April 21, 2026

Reuters; Anthropic seeks pivotal court win in music publisher lawsuit over AI training

"Artificial intelligence company Anthropic has asked a California federal court to rule in its favor in a copyright lawsuit brought by music publishers Universal Music Group, Concord and ABKCO, arguing it made "fair use" of their song lyrics to train its AI-powered chatbot Claude.

Anthropic's Monday filing addresses the key question for a wave of high-stakes copyright cases brought by creators against tech companies: is it legally permissible to copy millions of copyrighted works without permission to train AI models?...

The lawsuit is one of dozens of disputes between copyright owners such as authors and news outlets, and tech giants including OpenAI, Microsoft and Meta Platforms over the training of their AI systems. Amazon- and Google-backed Anthropic was the first major AI company to settle one of the cases, agreeing last year to pay a group of authors $1.5 billion to resolve a class-action lawsuit."

Got an Old Kindle? It Might Not Work Anymore. Here’s What to Do.; The New York Times, April 9, 2026

 Brenda Stolyar, The New York Times; Got an Old Kindle? It Might Not Work Anymore. Here’s What to Do. 

"Earlier this week, Amazon notified its customers via email that, starting May 20, it will end support for Kindle and Kindle Fire devices released in 2012 or earlier. That means you’ll no longer be able to download new content to your e-reader via Amazon’s Kindle Store.

Although you don’t have to stop using your old Kindle immediately, the restricted functionality may force you to consider whether you want to upgrade to a newer version or ditch the Amazon ecosystem altogether.

If you own a Kindle that’s no longer supported, Amazon wants you to buy a new one. The company is offering a 20% discount that you can apply toward one of its new Kindle models, along with a $20 e-book credit that will automatically be applied to your account with the purchase of a new device. The promotion will be valid through June 20, exactly a month after the company ends support for its older models.

Here’s what you need to know about Amazon’s decision to sunset its older e-readers and tablets, and what that means for you."

AI use surges among policymakers; Axios, April 23, 2026

Eleanor Hawkins, Axios; AI use surges among policymakers

"AI is no longer just a research tool in Washington, D.C. — it's starting to shape how policymakers form opinions, according to Penta Group data shared exclusively with Axios. 

Why it matters: Policymakers are the latest to lean on AI for guidance, signaling its growing role in shaping decisions across markets, consumer behavior and now public policy.

By the numbers: Penta Group surveyed 2,060 U.S. federal policymakers and senior staff across Congress, the administration and federal agencies, and found that 27% say AI informs their perspective on a topic — up from 17% in 2025 — putting AI on par with traditional sources like experts and web searches...

The intrigue: Republican policymakers are about 1.2 times more likely than Democratic policymakers to use AI daily: 69% compared with 57%.

Republicans are also more likely to find AI helpful in shaping their perspectives (30% vs. 23% for Democrats).

Meanwhile, Democrats are more than twice as likely to avoid AI entirely: 13% say they don't use it in their daily work, compared with 5% of Republicans."