Showing posts with label AI. Show all posts

Friday, March 13, 2026

Leveling Up or Losing Rights? Copyright Challenges of AI-Generated Content in Gaming; The National Law Review, March 12, 2026

Nichole Hayden, Zahra Asadi, Nelson Mullins Idea Exchange - Insights, The National Law Review; Leveling Up or Losing Rights? Copyright Challenges of AI-Generated Content in Gaming

"Artificial intelligence is quickly becoming part of the regulated gaming ecosystem. From electronic slot machines and casino games to online sportsbooks and betting platforms, AI is now used to assist with everything from game themes and visual design to user interfaces and marketing content. While these tools promise efficiency and faster development cycles, they also raise an important legal question for gaming companies: when AI is involved in creating game content, who actually owns the result?"

OpenAI sued for practicing law without a license; ABA Journal, March 6, 2026

AMANDA ROBERT, ABA Journal; OpenAI sued for practicing law without a license

"OpenAI has been accused of practicing law without a license in a lawsuit brought by Nippon Life Insurance Co. of America. 

According to the insurer’s complaint, which was filed on Wednesday in the Northern District of Illinois, OpenAI’s artificial intelligence platform ChatGPT pushed a woman seeking disability benefits to breach a settlement agreement and file dozens of motions that “serve no legitimate legal or procedural purpose.”"

Wednesday, March 11, 2026

Introducing The Anthropic Institute; Anthropic, March 11, 2026

Anthropic; Introducing The Anthropic Institute

"We’re launching The Anthropic Institute, a new effort to confront the most significant challenges that powerful AI will pose to our societies. The Anthropic Institute will draw on research from across Anthropic to provide information that other researchers and the public can use during our transition to a world containing much more powerful AI systems.

In the five years since Anthropic began, AI progress has moved incredibly quickly. It took us two years to release our first commercial model, and just three more to develop models that can discover severe cybersecurity vulnerabilities, take on a wide range of real work, and even begin to accelerate the pace of AI development itself.

We predict that far more dramatic progress will follow in the next two years. One of our company’s core convictions is that AI development is accelerating: that the improvements we make are compounding over time. Because of this, extremely powerful AI, like the kind our CEO Dario Amodei describes in Machines of Loving Grace, is coming far sooner than many think.

If this is right, society is shortly going to need to confront many massive challenges. How will powerful AI systems reshape our jobs and economies? What kinds of opportunities for greater societal resilience will they give us? What kinds of threats will they magnify or introduce? What are the expressed “values” of AI systems and how will society help companies determine what the appropriate values are? And, if the recursive self-improvement of AI systems does begin to occur, who in the world should be made aware, and how should these systems be governed?

The Anthropic Institute’s goal is to tell the world what we’re learning about these challenges as we build frontier AI systems, and to partner with external audiences to help address the risks we must confront. Whether our societies are able to do so will determine whether or not transformative AI delivers the radical upsides that we believe are possible in science, economic development, and human agency.

The Institute is led by our co-founder Jack Clark, who will assume a new role as Anthropic’s Head of Public Benefit. It has an interdisciplinary staff of machine learning engineers, economists, and social scientists, bringing together and expanding three of Anthropic’s research teams: the Frontier Red Team, which stress-tests AI systems to understand the outermost limits of their current capabilities; Societal Impacts, which studies how AI is being used in the real world; and Economic Research, which tracks its impact on jobs and the larger economy. The Institute will also incubate new teams, and is currently working on efforts around forecasting AI progress and better understanding how powerful AI will interact with the legal system.

The Institute has a unique vantage point: it has access to information that only the builders of frontier AI systems possess. It will use this to its full advantage, reporting candidly about what we’re learning about the shape of the technology we’re making. At the same time, the Institute is a two-way street. It will engage with workers and industries facing displacement, and with the people and communities who feel the future bearing down on them but are unsure how to respond. What we learn will inform what the Institute studies, and how our company as a whole chooses to act.

The Anthropic Institute has made several founding hires:

  • Matt Botvinick, a Resident Fellow at Yale Law School and previously Senior Director of Research at Google DeepMind and Professor in Neural Computation at Princeton, is joining the Institute to lead its work on AI and the rule of law.
  • Anton Korinek is joining the Economic Research team, on leave from his role as Professor of Economics at the University of Virginia, to lead an effort studying how transformative AI could reshape the very nature of economic activity.
  • Zoë Hitzig, who previously studied AI’s social and economic impacts at OpenAI, is joining to connect our economics work to model training and development."

Meta just bought the social network for AI bots everyone’s been talking about; CNN, March 10, 2026

Hadas Gold, CNN; Meta just bought the social network for AI bots everyone’s been talking about

"Meta, the company behind some of the world’s most popular social media platforms, just scooped up a new site – for bots.

Meta has acquired Moltbook, the social media network where AI agents interact with one another autonomously, the company said in a statement on Tuesday.

Meta is competing with rivals like OpenAI for both talent and users’ attention. And as AI expands into more aspects of Americans’ lives, tech companies are trying to figure out the best way to position themselves to win what’s becoming a sort of technological arms race.

Moltbook became the talk of Silicon Valley last month, racking up millions of registered bots within days of its launch. Some in the industry saw it as a major leap because it demonstrated what can happen when AI agents socialize with one another like humans. Others said the site is full of sham agents, AI slop and security risks and should be viewed skeptically."

‘AN IMPORTANT STEP’: EUROPEAN PARLIAMENT ADOPTS REPORT ON COPYRIGHT AND GENERATIVE AI; Billboard, March 11, 2026

Lars Brandle , Billboard; ‘AN IMPORTANT STEP’: EUROPEAN PARLIAMENT ADOPTS REPORT ON COPYRIGHT AND GENERATIVE AI

"Two years after the European Parliament passed the Artificial Intelligence Act, MEPs this week finally adopted a report on copyright and generative AI.

On Tuesday, March 10, Parliament passed its resolution on “Copyright and generative artificial intelligence – opportunities and challenges” with an overwhelming majority of 460 votes to 71, and with 88 abstentions.

The report calls for the EU and its 27 member states to focus on the crucial issues of how AI and tech companies engage with copyright-protected music in the digital age, and explores a licensing system as a solution, paving the way for fair compensation for the use of creative works."

Tuesday, March 10, 2026

Vatican theological commission warns of replacing God with 'a world governed by machines'; National Catholic Reporter, March 5, 2026

COURTNEY MARES, National Catholic Reporter; Vatican theological commission warns of replacing God with 'a world governed by machines'

"The Vatican's International Theological Commission has warned that if humanity places total trust in technology in a "world ruled by machines," it risks replacing the "living God" with a counterfeit "virtual God."

The assessment came in a sweeping new document, published on March 4, examining how artificial intelligence, transhumanism and other technological developments can pose profound risks to human identity and dignity. The document seeks to propose a response rooted in Christian anthropology and the Gospel.

The 48-page document, titled, "Quo vadis, humanitas? Thinking about Christian anthropology in light of some scenarios for the future of humanity," was published in Italian and Spanish after being approved by Pope Leo XIV. Its Latin title — meaning "Where are you going, humanity?" — echoes the question tradition holds was put to St. Peter before his crucifixion in Rome.

"At this juncture in the 21st century, the human family is faced with questions so radical that they threaten its very existence as we have known it," the document says.

"The eruption of scientific and technical development unprecedented in the history of the planet must be accompanied by a corresponding growth in responsibility that directs progress toward the good of human beings, because they are today exposed to risks never imagined before."

The document, written by a subcommission that met between 2022 and 2025 and approved unanimously at the ITC's 2025 plenary session, was written to mark the 60th anniversary of Gaudium et Spes, the Second Vatican Council's landmark Pastoral Constitution on the Church in the Modern World."

Sunday, March 8, 2026

Anthropic’s Ethical Stand Could Be Paying Off; The Atlantic, March 7, 2026

 Ken Harbaugh, The Atlantic; Anthropic’s Ethical Stand Could Be Paying Off

"The events of the past week reminded me of my early days as a Navy pilot nearly three decades ago. One of my first tasks was to sign a document pledging never to surveil American citizens. By the time of the 9/11 attacks, I was an aircraft commander, leading combat-reconnaissance aircrews that gathered large-scale intelligence and informed battlefield targeting decisions. I took for granted that somewhere along those decision chains, a human being was in the loop.

I could not have defined artificial intelligence then, but I understood instinctively that a person, not a machine, would bear the weight of life-and-death choices. This was not a bureaucratic consideration. It was a hard line that those of us in uniform were expected to hold.

In the standoff between Anthropic and the Pentagon, a private company was forced to hold the line against its own government. In doing so, Anthropic may have earned something more valuable than the contract it lost. In an industry where trust is the scarcest resource, Anthropic just banked a substantial deposit."

Thursday, March 5, 2026

A Long-Running AI Copyright Question Gets an Answer as Supreme Court Stays Mum; CNET, March 4, 2026

Omar Gallaga, CNET; A Long-Running AI Copyright Question Gets an Answer as Supreme Court Stays Mum

The man behind the AI-generated image in question reflects on what he calls a "philosophical milestone."

"A legal battle over AI copyright that has gone on for more than a decade may have reached its end, with the US Supreme Court declining to hear a case involving AI-generated visual art...

In an email to CNET, Thaler said that although the court declined to hear his appeal, "I see this moment as a philosophical milestone rather than a defeat."

While he's unsure if legal action will continue, Thaler says he's still certain that the law on copyright, as written, is intended to exclude nonhuman inventors.

"By bringing DABUS into the legal system, I confronted a question long confined to theory: whether invention and creativity must remain tied to humans or whether autonomous computational processes could genuinely originate ideas," Thaler said."

Tuesday, March 3, 2026

The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’; The Conversation, March 1, 2026

Lecturer, International Relations, Deakin University, The Conversation; The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’

"In the leadup to the weekend’s US and Israeli attacks on Iran, the US Department of Defense was locked in tense negotiations with artificial intelligence (AI) company Anthropic over exactly how the Pentagon could use the firm’s technology.

Anthropic wanted guarantees its Claude systems would not be used for purposes such as domestic surveillance in the US and operating autonomous weapons without human control. 

In response, US president Donald Trump on Friday directed all US federal agencies to cease using Anthropic’s technology, saying he would “never allow a radical left, woke company to dictate how our great military fights and wins wars!”

Hours later, rival AI lab OpenAI (maker of ChatGPT) announced it had struck its own deal with the Department of Defense. The key difference appears to be that OpenAI permits “all lawful uses” of its tools, without specifying ethical lines OpenAI won’t cross.

What does this mean for military AI? Is it the end for the idea of “ethical AI” in warfare?"

US Supreme Court declines to hear dispute over copyrights for AI-generated material; Reuters, March 2, 2026

Reuters; US Supreme Court declines to hear dispute over copyrights for AI-generated material

"The U.S. Supreme Court declined on Monday to take up the issue of whether art generated by artificial intelligence can be copyrighted under U.S. law, turning away a case involving a computer scientist from Missouri who was denied a copyright for a piece of visual art made by his AI system.

Plaintiff Stephen Thaler had appealed to the justices after lower courts upheld a U.S. Copyright Office decision that the AI-crafted visual art at issue in the case was ineligible for copyright protection because it did not have a human creator."

Monday, March 2, 2026

Everybody’s Talking About AI: Takeaways from the February 20, 2026 Fordham Law Symposium; Lexology, February 26, 2026

Seyfarth Shaw LLP - Owen Wolfe, Lexology; Everybody’s Talking About AI: Takeaways from the February 20, 2026 Fordham Law Symposium

"On February 20, 2026, Gadgets, Gigabytes and Goodwill Blog co-editor Owen Wolfe spoke at the Fordham School of Law as part of the Fordham Intellectual Property, Media & Entertainment Law Journal Symposium, The Meaning of Ownership: Rethinking Intellectual Property, Creativity, and Control in the Age of Innovation. Owen discussed how courts have so far applied the “fair use” doctrine to cases involving generative AI, distinguishing between use of copyrighted materials in gen AI training and gen AI outputs that are alleged to be substantially similar to the original works. He noted that the decisions to date have been mixed, with some courts finding that certain uses of copyrighted works for AI training are fair use, and other courts expressing skepticism about whether that is the correct result. Owen also surveyed arguments both for and against a finding of fair use, giving the audience food for thought about what courts might decide in the future and whether we might see an amendment to the Copyright Act down the road.

Owen’s talk followed one by Dr. Douglas Lind, a professor at Virginia Tech, who surveyed the history of copyright law in the United States. He focused on the law’s treatment of phonograph records and sound recordings when those new technologies first emerged. Dr. Lind noted that copyright law evolved, and the Copyright Act was eventually amended, to address those new technologies. Dr. Lind raised the question of whether the Copyright Act should be amended again to address gen AI."

'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military; TechRadar, March 1, 2026

TechRadar; 'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military

"After Claude developer Anthropic walked away from a deal with the US Department of War over safety and security concerns, OpenAI has decided to sign an agreement with the military – and ChatGPT users are far from happy about it.

As reported by Windows Central, a growing number of people are canceling their ChatGPT subscriptions and switching to other AI chatbots instead, including Claude. A quick browse of social media or Reddit is enough to see that there's a growing backlash to the move.

Some Redditors are posting guides to extracting yourself and your data from ChatGPT, while others are accusing OpenAI of having "no ethics at all" and "selling their soul" by agreeing to allow their AI models to be used by the US military complex."

Sunday, March 1, 2026

An Ohio newspaper has a new star writer. It isn’t human.; The Washington Post, March 1, 2026

The Washington Post; An Ohio newspaper has a new star writer. It isn’t human.

At the 184-year-old Cleveland Plain Dealer, a top editor’s push to let AI draft news articles is boosting traffic — and spooking staffers.


"The Plain Dealer, Cleveland’s largest newspaper, has begun to feature a new byline. On recent articles about an ice carving festival, a medical research discovery and a roaming pack of chicken-slaying dogs, a reporter’s name is paired with the words “Advance Local Express Desk.” It means: This article was drafted by artificial intelligence."

Saturday, February 28, 2026

If A.I. Is a Weapon, Who Should Control It?; The New York Times, February 28, 2026

The New York Times; If A.I. Is a Weapon, Who Should Control It?

"We spent the Cold War worrying mostly about military folly, and A.I. entered into our anxieties even then: the Soviet Doomsday Machine in “Dr. Strangelove,” the game-playing computer in “WarGames” and of course the fateful “Terminator” decision to make Skynet operational.

But for the last few years, as A.I. advances have concentrated potentially extraordinary power in the hands of a few companies and C.E.O.s — themselves embedded in a Bay Area culture of science-fiction dreams and apocalyptic fears — it’s become more natural to worry more about private power and ambition, about would-be A.I. god-kings rather than presidents and generals.

Until, that is, the current collision between the Department of Defense and Anthropic, the artificial intelligence pioneer, over whether Anthropic’s A.I. models should be bound by the company’s ethical constraints or made available for all uses the Pentagon might have in mind."

Friday, February 20, 2026

The battle over Scott Adams' AI afterlife; Business Insider, February 20, 2026

Katherine Tangalakis-Lippert, Business Insider; The battle over Scott Adams' AI afterlife

 "In a 2021 podcast clip, the cartoonist said he granted "explicit permission" for anyone to make a posthumous AI based on him, arguing that his public thoughts and words are "so pervasive on the internet" that he'd be "a good candidate to turn into AI." He added that he was OK with an AI version of him saying new things after he died, as long as they seemed compatible with what he might say while alive.

Shortly after the 68-year-old's January death from complications of metastatic prostate cancer, an AI-generated "Scott Adams" account began posting videos of a digital version of the cartoonist speaking directly to viewers about current events and philosophy, mirroring the cadence and topics the actual human Adams discussed for years.

His family says it's a violation, not a tribute."

Judge skeptical over remorse of defendant who used AI to write apology letters; ABA Journal, February 18, 2026

AMANDA ROBERT, ABA Journal; Judge skeptical over remorse of defendant who used AI to write apology letters

"A judge in New Zealand questioned whether a defendant in an arson case is truly remorseful after discovering that she used artificial intelligence to draft apology letters."

Thursday, February 19, 2026

Anthropic is clashing with the Pentagon over AI use. Here’s what each side wants; CNBC, February 18, 2026

Ashley Capoot, CNBC; Anthropic is clashing with the Pentagon over AI use. Here’s what each side wants

"Anthropic wants assurance that its models will not be used for autonomous weapons or to “spy on Americans en masse,” according to a report from Axios. 

The DOD, by contrast, wants to use Anthropic’s models “for all lawful use cases” without limitation."

Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon; Fast Company, February 17, 2026

REBECCA HEILWEIL, Fast Company; Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon

"A dispute between AI company Anthropic and the Pentagon over how the military can use the company’s technology has now gone public. Amid tense negotiations, Anthropic has reportedly called for limits on two key applications: mass surveillance and autonomous weapons. The Department of Defense, which Trump renamed the Department of War last year, wants the freedom to use the technology without those restrictions.

Caught in the middle is Palantir. The defense contractor provides the secure cloud infrastructure that allows the military to use Anthropic’s Claude model, but it has stayed quiet as tensions escalate. That’s even as the Pentagon, per Axios, threatens to designate Anthropic a “supply chain risk,” a move that could force Palantir to cut ties with one of its most important AI partners."

Pentagon threatens Anthropic punishment; Axios, February 16, 2026

Dave Lawler, Maria Curi, Mike Allen, Axios; Pentagon threatens Anthropic punishment

"Defense Secretary Pete Hegseth is "close" to cutting business ties with Anthropic and designating the AI company a "supply chain risk" — meaning anyone who wants to do business with the U.S. military has to cut ties with the company, a senior Pentagon official told Axios.

The senior official said: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."

Why it matters: That kind of penalty is usually reserved for foreign adversaries. 

Chief Pentagon spokesman Sean Parnell told Axios: "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people."

The big picture: Anthropic's Claude is the only AI model currently available in the military's classified systems, and is the world leader for many business applications. Pentagon officials heartily praise Claude's capabilities."