Showing posts with label AI. Show all posts

Thursday, April 2, 2026

AI gaps in the boardroom are becoming a reputational risk; Axios, April 2, 2026

Eleanor Hawkins, Axios; AI gaps in the boardroom are becoming a reputational risk

"The big picture: Companies across every industry are being forced into rapid AI-driven transformation, but many corporate boards lack the expertise to guide strategy, manage risk or communicate decisions credibly to stakeholders.

By the numbers: Only 39% of Fortune 100 boards have any form of AI oversight, such as committees, a director with AI expertise, or an ethics board, according to McKinsey research.


Another recent report found that only 13% of S&P 500 companies have at least one director with AI-related expertise.


Similarly, McKinsey's survey of directors found that 66% say their boards have "limited to no knowledge or experience" with AI, and nearly one in three say AI does not even appear on their agendas.


And a report from the National Association of Corporate Directors (NACD) found that only 17% have established an AI education plan for directors, and 6% have a dedicated committee to oversee AI.


Between the lines: Having an AI-savvy board is a major competitive advantage, according to a recent MIT study."

Wednesday, April 1, 2026

USPTO announces agentic AI-assisted evaluator for patent eligibility determinations; United States Patent and Trademark Office (USPTO), April 1, 2026

United States Patent and Trademark Office (USPTO); USPTO announces agentic AI-assisted evaluator for patent eligibility determinations

"As part of the U.S. Patent and Trademark Office's (USPTO) continued efforts to incorporate artificial intelligence (AI) into agency operations—first with the Artificial Intelligence Search Automated Pilot Program, or “ASAP!,” for patent prior art references followed by the Trademark Classification Agentic Codification Tool, or “Class ACT,” for trademark searching—the USPTO today announced the first-of-its-kind agentic AI tool to assist in patent eligibility determinations under 35 U.S.C. §101. 

America’s Innovation Agency’s new AI system, termed “McConaughey Agentic Tasking Technology Helping Examiner Workload,” or “MATTHEW,” for short, will help examiners tackle the thorniest of eligibility questions as to whether claims presented are an abstract idea or a patent-eligible invention. “MATTHEW will greatly enhance our ability to make the close calls—or any call, really—as I herewith also suspend all applicable precedent, including Desjardins, Alice, and Mayo,” said USPTO Director John A. Squires. “Basically, in terms of eligibility, if MATTHEW says your invention is ‘Alright, Alright, Alright,’ then it’s ‘Alright, Alright, Alright’ with the USPTO.” 

“Initially, we had some concerns that we would be introducing a three-part test in place of the two-part test under Alice and Mayo, but I think we’ll be al…um, okay,” he continued.

“We want to equip our examiners—the best in the world at what they do—with the best tools to assist them,” said Director Squires. “In fact, MATTHEW was selected after careful evaluation of best-in-breed offerings, including the ‘Binary Eligibility Engaged Translation Language Environment Joint User Interface Computational Evaluator,’ or ‘BEETLEJUICE,’” he stated. “But the coders had some issues in testing when they said the name three times. I hope they’ll be al…um, okay,” remarked the Director. 

When asked if the USPTO licensed its tool in light of famed actor McConaughey’s recent Name Image and Likeness (NIL) ‘non-traditional’ registrations, Director Squires retorted, “Well, he’s the one who said, ‘trademark yourself!’—I think the Founders would have wanted this.” When asked if he had heard from Mr. McConaughey’s lawyers, Director Squires produced an unintelligible, guttural chanting sound and began rhythmically beating his chest with his fist.

For more information on this trailblazing AI system, please visit the USPTO website."

Tuesday, March 31, 2026

Copyright Law in 2025: Courts begin to draw lines around AI training, piracy, and market harm; Reuters, March 16, 2026

Reuters; Copyright Law in 2025: Courts begin to draw lines around AI training, piracy, and market harm

"In 2025, U.S. courts issued the first substantive, merits-stage decisions addressing whether the use of copyrighted works to train generative artificial intelligence systems constitutes "fair use." Although these rulings do not settle all open questions — and in some respects highlight emerging judicial disagreements — they represent a significant inflection point in copyright law's response to large language models, image generators, and other foundation models.

Taken together, these cases establish early guideposts for AI developers, publishers, media companies, and enterprises deploying generative AI systems. Below, we summarize the most important copyright decisions and pending cases shaping the law in 2025...

Conclusion and recommendations

The 2025 decisions reflect cautious but meaningful progress in defining how copyright law applies to generative AI. Courts are increasingly receptive to fair use arguments for training on lawfully acquired data, deeply skeptical of speculative market-harm claims, and uniformly intolerant of piracy. At the same time, cases involving direct competition, news content, and human likeness may test the limits of these early rulings."

Monday, March 30, 2026

Axios AI+DC Summit: Copyright protection in the AI era will be up to the courts, industry leaders say; Axios, March 27, 2026

Julie Bowen, Axios; Axios AI+DC Summit: Copyright protection in the AI era will be up to the courts, industry leaders say

"Washington, D.C. — As policymakers grapple with how to regulate AI, the hardest questions around copyright and fair use are being punted to the courts, according to governance, creator, and technology experts at an Axios expert voices roundtable.

The big picture: With Congress moving slowly and policy disagreements persisting, judges are becoming the primary deciders of how AI companies and creators work together — or don't.


That's partly by necessity: "Fair use is incredibly complicated — case by case, fact specific," News/Media Alliance president and CEO Danielle Coffey said.


"Each case that we get … we start to get these new guideposts," Jones Walker partner Graham Ryan said.


Ryan said they expect at least three fair use decisions this year that will have implications for the broader AI-artist ecosystem.


Axios' Maria Curi and Ashley Gold moderated the March 25 discussion, which was sponsored by Adobe.

What they're saying: Legal uncertainty remains. For example, two courts within the same district, and during the same week, differed in the reasoning behind their rulings on similar matters of fair use and AI.


"There is a current, live controversy over … the extant understanding of the fourth factor in fair use, which is: Does the copy replace the market for the work?" said Kevin Bankston, senior adviser for the Center for Democracy & Technology.


Still, "we have been trying to support the process through the courts, because we think there is a really strong framework in copyright law for protecting artists right now," according to Public Knowledge president and CEO Chris Lewis."

Sunday, March 29, 2026

New Political Group to Push Trump’s A.I. Agenda in Midterms; The New York Times, March 29, 2026

The New York Times; New Political Group to Push Trump’s A.I. Agenda in Midterms

"A new political operation with strong ties to the Trump administration is preparing to spend big money to boost President Trump’s record on artificial intelligence.

The group, called Innovation Council Action, said on Sunday that it would spend at least $100 million this year on its activities. That will include a major advocacy push behind new A.I. policy guidelines unveiled by the White House this month that seek to block state laws regulating A.I. The group is organized as a nonprofit, but is likely to start a super PAC as part of that $100 million push. That structure would allow Innovation Council to help backers and attack opponents of Mr. Trump’s A.I. agenda...

Innovation Council, by contrast, is explicitly aligned with the Trump operation. It is led by Taylor Budowich, a longtime Trump political adviser who served as White House deputy chief of staff, and has the blessing of David Sacks, a White House official."

Meta’s court losses spell potential trouble for AI research, consumer safety; CNBC, March 29, 2026

Jonathan Vanian, CNBC; Meta’s court losses spell potential trouble for AI research, consumer safety

"Over a decade ago, Meta – then known as Facebook – hired social science researchers to analyze how the social network’s services were affecting users. It was a way for the company and its peers to show they were serious about understanding the benefits and potential risks of their innovations.

But as Meta’s court losses this week illustrate, the researchers’ work can become a liability. Brian Boland, a former Facebook executive who testified in both trials — one in New Mexico and the other in Los Angeles — says the damning findings from Meta’s internal research and documents seemed to contradict the way the company portrayed itself publicly. Juries in the two trials determined that Meta inadequately policed its site, putting kids in harm’s way. 

Mark Zuckerberg’s company began clamping down on its research teams a few years ago after a Facebook researcher, Frances Haugen, became a prominent whistleblower. The newer crop of tech companies, like OpenAI and Anthropic, subsequently invested heavily in researchers and charged them with studying the impact of modern AI on users and publishing their findings. 

With AI now getting outsized attention for the harmful effects it’s having on some users, those companies must ask if it’s in their best interest to continue funding research or to suppress it."

Friday, March 27, 2026

Mother and Daughter Rejected $26M Offer to Sell Farmland to Build 2,000-Acre Data Center, but Say Others Haven’t; People, March 26, 2026

Karla Marie Sanford, People; Mother and Daughter Rejected $26M Offer to Sell Farmland to Build 2,000-Acre Data Center, but Say Others Haven’t

“They call us old stupid farmers, you know, but we’re not,” said Ida Huddleston, 82.

"A Kentucky mother and daughter are continuing to open up about their decision to keep their farmland rather than accept a multi-million payout that could pave the way for a data center, which may still be happening anyway.

“My grandfather and great-grandfather and a whole bunch of family have all lived here for years, paid taxes on it, fed a nation off of it,” Delsia Bare told CBS affiliate WKRC. “Even raised wheat through the Depression and kept bread lines up in the United States of America when people didn’t have anything else.”

Bare and her 82-year-old mom Ida Huddleston own hundreds of acres of farmland outside Maysville, according to WKRC. Together, the two have rejected over $26 million to sell part of the farmland to an undisclosed Fortune 100 company."

Thursday, March 26, 2026

America's Newspapers emphasizes importance of protecting publishers’ intellectual property; Editor & Publisher, March 25, 2026

Staff | America's Newspapers, Editor & Publisher; America's Newspapers emphasizes importance of protecting publishers’ intellectual property

"America’s Newspapers has issued the following statement in response to the comprehensive national legislative framework on artificial intelligence released by the Trump administration...

Specifically, the framework affirms that the creative works and unique identities of American innovators, creators and publishers must be respected in the age of AI. At the same time, it recognizes that artificial intelligence systems require access to information to learn and improve, and proposes a balanced approach that both enables innovation and safeguards the rights of content creators.

“America’s Newspapers strongly supports the administration’s recognition that high-quality journalism and original content are essential to the continued strength of our democracy and economy,” said Matt McMillan, chair of America’s Newspapers and CEO of Press Publications."

White House Unveils A.I. Policy Aimed at Blocking State Laws; The New York Times, March 20, 2026

The New York Times; White House Unveils A.I. Policy Aimed at Blocking State Laws

The Trump administration on Friday released new guidelines for federal legislation on the technology, recommending some safeguards for children and consumer protections for energy costs.

"The White House on Friday released policy guidelines that called for blocking state laws regulating artificial intelligence, while also recommending some safeguards for children and consumer protections for energy costs.

Dozens of states have passed laws in recent months to regulate A.I., which has created concerns about the technology’s potential to steal jobs, push up energy prices and threaten national security. But President Trump has made clear U.S. companies should have mostly free rein in a global race to dominate the technology.

On Friday, the White House called on Congress to pass federal A.I. legislation to override the state laws. Among the Trump administration’s suggested measures, Congress would streamline the process for building data centers, the warehouses full of computers that power A.I. The framework also proposed guardrails to prevent the government from using the technology for censorship, as well as mandating A.I.-related work force training."

Is Big Tech Facing a Big Tobacco Moment?; The New York Times, March 26, 2026

Andrew Ross Sorkin, Bernhard Warner, Sarah Kessler, Michael J. de la Merced, Niko Gallogly and Brian O’Keefe, The New York Times; Is Big Tech Facing a Big Tobacco Moment?

Back-to-back courtroom losses have put technology giants, including Meta and Google, in uncertain territory as they face lawsuits and bans on teen users.

"Andrew here. Back in 2018, I moderated a panel at the World Economic Forum that included Marc Benioff of Salesforce. It was then that he essentially declared that Facebook was the modern-day equivalent of cigarettes, and that it and other social media companies should be regulated as such.

Well, Meta’s loss in court on Wednesday, in a case about whether its platforms were designed to be addictive to adolescents, may be a watershed. Investors don’t seem to be fazed — the company’s shares hardly moved after the verdict came out — but the decision could change the conversation around the company yet again. More below...

Some legal experts wonder if Big Tech is staring at a Big Tobacco moment, a reference to how cigarette makers had to overhaul their businesses — at a huge expense — after courts ruled that some of their products were addictive and harmful.

“We’re in a new era, a digital era, where we have to rethink definitions for products based on which entities might have superior information to prevent these injuries and accidents,” Catherine Sharkey, a professor of law at N.Y.U., told The Times. She added that the “implications” of those verdicts were “very, very big.”

“This has potentially large impacts on other areas in tech, A.I. and beyond that,” Jessica Nall, a San Francisco lawyer who represents tech companies and executives, told The Wall Street Journal. “The floodgates are already open.”

Meta and Google plan to appeal. The companies have signaled that they will fight efforts to make them drastically redesign their products and algorithms."

We're All Copyright Owners. Welcome to the Mess That AI Has Created; CNET, March 23, 2026

Katelyn Chedraoui, CNET; We're All Copyright Owners. Welcome to the Mess That AI Has Created

Copyright is one of the most important legal issues in the age of AI. And yes, it affects you.

"You probably rarely, if ever, think about copyright law. But if you want to understand why there are so many lawsuits being filed against AI companies, knowing a bit about copyright law is key. And whether you know it or not, these issues affect you.

If you've ever written a blog post, taken a photo or created an original video, you're a copyright owner. That's most of us, which means that copyright law -- its protections, limitations and application -- is more relevant to you than you might've thought. Sadly, copyright in the age of generative AI is something of a mess."

Tuesday, March 24, 2026

Fostering ethical use of AI in K-12 education; Iowa Public Radio, March 20, 2026

Iowa Public Radio; Fostering ethical use of AI in K-12 education

"The use of artificial intelligence in school has become more common since the launch of ChatGPT in late 2022. Today, a majority of U.S. teens say they use AI chatbots for school work, according to the Pew Research Center. 

On this episode of River to River, two Iowa-based educators who are working together to advance ethical and human-centered approaches to artificial intelligence across K-12 education share their experiences. Iowa State University professor Evrim Baran is the project director of the Critical AI in Education Pathways Initiative, which launched a micro-credential course this month for educators. Chad Sussex founded the Winterset Community School District's AI task force and has recently expanded into consulting for other school districts around the state.

Then we talk with Rebecca Winthrop, who coauthored a recent report on the potential negative risks that generative AI poses to students and what can be done to prevent them while maximizing the potential benefits of AI.

Guests:

  • Evrim Baran, ISU professor of educational technology and human-computer interaction and Helen LeBaron Hilton Chair, College of Health and Human Sciences
  • Chad Sussex, grades 7-12 assistant principal and AI task force leader, Winterset Community School District
  • Rebecca Winthrop, senior fellow and director of the Center for Universal Education, Brookings Institution"

Monday, March 16, 2026

How Trump Drove a Wedge Between Florida Republicans Over A.I.; The New York Times, March 16, 2026

David McCabe, The New York Times; How Trump Drove a Wedge Between Florida Republicans Over A.I.

A Florida bill that would have regulated artificial intelligence, backed by Gov. Ron DeSantis, failed to gain traction after President Trump made it clear he did not want states to rein in the technology.

"Florida lawmakers failed to pass a sweeping bill aimed at reining in the power of artificial intelligence by the time their annual legislative session wrapped up Friday.

The legislation, known as an A.I. Bill of Rights, flopped even though Gov. Ron DeSantis, a Republican, had spent months championing it. The bill would have forced companies to disclose when they use A.I. chatbots to interact with consumers and forbidden the technology’s use in licensed mental health counseling, among other measures.

But Republicans in the Florida House of Representatives refused to take up the bill because of President Trump. Mr. Trump has visibly positioned himself as pro-A.I., signing executive orders to protect the tech industry and threatening states that try to regulate the technology. In recent weeks, the White House has communicated to state legislators around the country that it is wary of states regulating A.I., while Mr. Trump has reiterated his support for the technology in public."

Sunday, March 15, 2026

SHELLEY’S ‘FRANKENSTEIN’ GETS AN AI REBOOT AT PASADENA’S HASTINGS BRANCH LIBRARY; Pasadena Now, March 15, 2026

Pasadena Now; SHELLEY’S ‘FRANKENSTEIN’ GETS AN AI REBOOT AT PASADENA’S HASTINGS BRANCH LIBRARY

A discussion today ties the 1818 novel's warnings about creator responsibility to contemporary debates over artificial intelligence, part of the city's One City, One Story program 

"Two centuries before algorithms began analyzing people’s dreams and predicting their crimes, Mary Shelley wrote a novel about a scientist who built something he could not control. That novel, “Frankenstein,” is the subject of a free discussion today at Hastings Branch Library, where presenter Rosemary Choate will connect its 207-year-old themes to the same questions about artificial intelligence that Pasadena’s citywide reading program is exploring all month.

The event, titled “Frankenstein: Myths and the Real Story?” is part of the Pasadena Public Library’s 24th annual One City, One Story program, which this year selected Laila Lalami’s “The Dream Hotel” — a dystopian novel about a woman detained because an algorithm, fed by data from her dreams, deemed her a future criminal. The library has organized a month of lectures, films and book discussions around the novel’s themes of surveillance, technology and freedom, and the Frankenstein session draws a direct line between Shelley’s 1818 tale and the anxieties at the center of Lalami’s story.

Choate, a comparative literature and humanities instructor and founder of the Pomona College Alumni Book Club, will lead the discussion at 3 p.m. She will examine themes including creator responsibility, the consequences of unchecked technological ambition and society’s rejection of the “creation” — questions the library’s event description calls “highly relevant to contemporary debates surrounding the development and governance of AI,” according to the Pasadena Public Library’s event listing.

Shelley published “Frankenstein; or, The Modern Prometheus” anonymously in 1818, when she was 20 years old. The novel tells the story of Victor Frankenstein, a young scientist who assembles a creature from dead body parts and recoils from what he has made. The creature, abandoned by its creator, becomes violent as it fails to find acceptance. The novel is widely considered one of the first works of science fiction.

The One City, One Story program, now in its 24th year, selects a single book each year for citywide reading and discussion. A 19-member committee of community volunteers, led by Senior Librarian Christine Reeder, chose “The Dream Hotel” for its exploration of surveillance, freedom and the reach of technology into private life. The program is sponsored by The Friends of the Pasadena Public Library and the Pasadena Literary Alliance.

The month of events culminates in a conversation with Lalami and Pasadena Public Library Director Tim McDonald on Saturday, March 21, at 2 p.m. at Pasadena Presbyterian Church, 585 E. Colorado Blvd. That event is also free and open to the public."

Music Copyright in the Gen AI Age: Where Are We Now?; Brooklyn Sports & Entertainment Law Blog, February 11, 2026

Sam Woods, Brooklyn Sports & Entertainment Law Blog; Music Copyright in the Gen AI Age: Where Are We Now?

"Imagine you are a musician who has dedicated years of your life creating an album or EP — tinkering with the production, revising lyrics, finding the perfect samples — and now, you have finally shared your art with the world and are thrilled with the project’s success. However, while scrolling on TikTok a few months later, you hear some familiar audio. Wait a minute, is that one of your songs? No… not quite, but why does it sound so similar? Turns out, the song was created using artificial intelligence (“AI”)."

AI is dressing up greed as progress on creative rights; Financial Times, March 14, 2026

Financial Times; AI is dressing up greed as progress on creative rights

"At this week’s London Book Fair, a lot of people were walking around with one particular title wedged under their arms. Called Don’t Steal This Book, its pages are empty apart from the names of thousands of authors, including Kazuo Ishiguro and Richard Osman. It’s a chilling protest against the rampant theft of creative work by tech firms, which could leave future artists unable to earn a living."

Saturday, March 14, 2026

Perspective: No copyright for AI-generated content; Northern Public Radio, March 13, 2026

David Gunkel, Northern Public Radio; Perspective: No copyright for AI-generated content

"What the courts actually decided is that neither the AI system nor the human who uses it counts as the author of the resulting work. Simply prompting ChatGPT or Claude to produce something isn’t considered the kind of creative activity that copyright law recognizes as authorship. And that creates an unexpected result. If neither the AI nor the human user is the author, then the work has no author at all. In effect, AI-generated images, music, and text become “orphan works”—creations that belong to no one. And that means that anyone can use them."

The Guardian view on changes to copyright laws: authors should be protected over big tech; The Guardian, March 13, 2026

The Guardian; The Guardian view on changes to copyright laws: authors should be protected over big tech

"In a scene that might have come from a dystopian novel, books were being stamped with “Human Authored” logos at this week’s London Book Fair. The Society of Authors described its labelling scheme as “an important sticking plaster to protect and promote human creativity in lieu of AI labelled content in the marketplace”.

Visitors to the fair were also being given copies of Don’t Steal This Book, an anthology of about 10,000 writers including Nobel laureate Kazuo Ishiguro, Malorie Blackman, Jeanette Winterson and Richard Osman, in which the pages are completely blank. The back cover states: “The UK government must not legalise book theft to benefit AI companies.” The message is clear: writers have had enough.

The fair comes the week before the government is due to deliver its progress report on AI and copyright, after proposals for a relaxation of existing laws caused outrage last year. Philippa Gregory, the novelist, described the plans for an “opt-out” policy, which puts the onus on writers to refuse permission for their work to be trawled, as akin to putting a sign on your front door asking burglars to pass by...

A House of Lords report published last week lays out two possible futures: one in which the UK “becomes a world-leading home for responsible, legalised artificial intelligence (AI) development” and another in which it continues “to drift towards tacit acceptance of large-scale, unlicensed use of creative content”. One scenario protects UK artists, the other benefits global tech companies. To avoid a world of empty content, the choice is clear."

Why I’m Suing Grammarly; The New York Times, March 13, 2026

The New York Times; Why I’m Suing Grammarly

"Like all writers, I live by my wits. My ability to earn a living rests on my ability to craft a phrase, to synthesize an idea, to make readers care about people and places they can only access through words on a page. Grammarly hadn’t checked with me before using my name. I only learned that an A.I. company was selling a deepfake of my mind from an article online.

And it wasn’t just me. Superhuman — the parent company of Grammarly — made fake editor versions of a range of people, including the novelist Stephen King, the late feminist author bell hooks, the former Microsoft chief privacy officer Julie Brill, the University of Virginia data science professor Mar Hicks and the journalist and podcaster Kara Swisher.

At this point in a story about A.I. exploitation, I would normally bemoan the need for new laws to tackle the novel harms of a new technology. But in this case, there is an old law that’s able to do the job.

In my home state of New York, the century-old right of publicity law prohibits a person’s name or image from being used for commercial purposes without her consent. At least 25 states have similar publicity statutes. And now, I’m using this law to fight back. I am the lead plaintiff in a class-action lawsuit against Superhuman in the U.S. District Court for the Southern District of New York, alleging that it violated New York and California publicity laws by not seeking consent before using our names in a paid service...

In this global crisis of consent, we must grab hold of the few anchors we have for enforcement. The right of publicity is one of them, but it needs to be strengthened into a federal law — not just a patchwork of state laws. In some states, it applies only to advertising; in others, to all types of commercial uses. In some, it only covers celebrities; in others, it applies to everyone...

Denmark has taken a novel approach: proposing an amendment to copyright laws that would allow people to copyright their bodies, facial features and voices to protect against A.I. deepfakes. I’d be happy to copyright myself — as copyright seems to be the only law that is regularly enforced on the internet these days...

What Grammarly made wasn’t a doppelgänger. As the writer Ingrid Burrington wrote on Bluesky, it was a sloppelgänger — A.I. slop masquerading as a person.

And it must be stopped."