
Tuesday, December 23, 2025

Vince Gilligan Talks About His Four-Season Plan for 'Pluribus' (And Why He's Done With 'Breaking Bad'); Esquire, December 23, 2025

Esquire; Vince Gilligan Talks About His Four-Season Plan for 'Pluribus' (And Why He's Done With 'Breaking Bad')

"How many times have you been asked whether the show is about AI?

I’ve been asked a fair bit about AI. It’s interesting because I came up with this story going on ten years ago, and this was before the advent of ChatGPT. So I can’t say I was thinking about this current thing they call AI, which, by the way, feels like a marketing tool to me, because there’s no intelligence there. It’s a really amazing bit of sleight of hand that makes it look like the act of creation is occurring, but really it’s just taking little bits and pieces from a hundred other sources and cobbling them together. There’s no consciousness there. I personally am not a big fan of what passes for AI now. I don’t wish to see it take over the world. I don’t wish to see it subvert the creative process for human beings. But in full disclosure, I was not thinking about it specifically when I came up with this.

Even so, when AI entered the mainstream conversation, you must have seen the resonance.

Yeah. When ChatGPT came out, I was basically appalled. But yeah, I probably was thinking, wow, maybe there’s some resonance with this show...

Breaking Bad famously went from the brink of cancellation to being hailed as one of the greatest television series of all time. Did that experience change how you approached making Pluribus?

It allowed us to make it. It really did. People have asked me recently, are you proud of the fact that you got an original show, a non IP-derived show on the air? And I say: I am proud of that, and I feel lucky, but it also makes me sad. Because I think, why is it so hard to get a show that is not based on pre-existing intellectual property made?"

Copyright and AI Battle for the Future; New York State Bar Association (NYSBA), December 23, 2025

Nyasha Shani Foy, Temidayo Akinjisola and James Parker, New York State Bar Association (NYSBA); Copyright and AI Battle for the Future

"This article will explore the balance of progress and protection at play stemming from the use of AI that may shape the future of copyright law."

Not Just AI: Traditional Copyright Decisions of 2025 That Should Be on Your Radar; IP Watchdog, December 22, 2025

Jason Bloom & Michael Lambert, IP Watchdog; Not Just AI: Traditional Copyright Decisions of 2025 That Should Be on Your Radar

"In a year dominated by artificial intelligence (AI) copyright cases, 2025 also featured several influential cases on traditional copyright issues that will impact copyright owners, internet service providers, website owners, advertisers, social media users, media companies, and many others. Although the U.S. Supreme Court did not decide a copyright case this year, it heard argument on secondary liability and willfulness issues in Cox v. Sony. Lower courts continued to wrestle with applying the fair use factors two years after the Supreme Court issued Warhol v. Goldsmith. The divide over whether the “server test” applies to embedded works deepened—and remains unsettled. And the Ninth Circuit further refined the standard for pleading access to online works. This article highlights some of the most important copyright cases from this year and their practical implications."

Sunday, December 21, 2025

Notre Dame receives $50 million grant from Lilly Endowment for the DELTA Network, a faith-based approach to AI ethics; Notre Dame News, December 19, 2025

Carrie Gates and Laura Moran Walton, Notre Dame News; Notre Dame receives $50 million grant from Lilly Endowment for the DELTA Network, a faith-based approach to AI ethics

"The University of Notre Dame has been awarded a $50.8 million grant from Lilly Endowment Inc. to support the DELTA Network: Faith-Based Ethical Formation for a World of Powerful AI. Led by the Notre Dame Institute for Ethics and the Common Good (ECG), this grant — the largest awarded to Notre Dame by a private foundation in the University’s history — will fund the further development of a shared, faith-based ethical framework that scholars, religious leaders, tech leaders, teachers, journalists, young people and the broader public can draw upon to discern appropriate uses of artificial intelligence, or AI.

The grant will also support the establishment of a robust, interconnected network that will provide practical resources to help navigate challenges posed by rapidly developing AI. Based on principles and values from Christian traditions, the framework is designed to be accessible to people of all faith perspectives.

“We are deeply grateful to Lilly Endowment for its generous support of this critically important initiative,” said University President Rev. Robert A. Dowd, C.S.C. “Pope Leo XIV calls for us all to work to ensure that AI is ‘intelligent, relational and guided by love,’ reflecting the design of God the Creator. As a Catholic university that seeks to promote human flourishing, Notre Dame is well-positioned to build bridges between religious leaders and educators, and those creating and using new technologies, so that they might together explore the moral and ethical questions associated with AI.”"

Tuesday, December 16, 2025

The Architects of AI Are TIME’s 2025 Person of the Year; Time, December 11, 2025

Charlie Campbell, Andrew R. Chow and Billy Perrigo, Time; The Architects of AI Are TIME’s 2025 Person of the Year

"For decades, humankind steeled itself for the rise of thinking machines. As we marveled at their ability to beat chess champions and predict protein structures, we also recoiled from their inherent uncanniness, not to mention the threats to our sense of humanity. Leaders striving to develop the technology, including Sam Altman and Elon Musk, warned that the pursuit of its powers could create unforeseen catastrophe.

This year, the debate about how to wield AI responsibly gave way to a sprint to deploy it as fast as possible. “Every industry needs it, every company uses it, and every nation needs to build it,” Huang tells TIME in a 75-minute interview in November, two days after announcing that Nvidia, the world’s first $5 trillion company, had once again smashed Wall Street’s earnings expectations. “This is the single most impactful technology of our time.” OpenAI’s ChatGPT, which at launch was the fastest-growing consumer app of all time, has surpassed 800 million weekly users. AI wrote millions of lines of code, aided lab scientists, generated viral songs, and spurred companies to re-examine their strategies or risk obsolescence. (OpenAI and TIME have a licensing and technology agreement that allows OpenAI to access TIME’s archives.)...

This is the story of how AI changed our world in 2025, in new and exciting and sometimes frightening ways. It is the story of how Huang and other tech titans grabbed the wheel of history, developing technology and making decisions that are reshaping the information landscape, the climate, and our livelihoods. Racing both beside and against each other, they placed multibillion-dollar bets on one of the biggest physical infrastructure projects of all time. They reoriented government policy, altered geopolitical rivalries, and brought robots into homes. AI emerged as arguably the most consequential tool in great-power competition since the advent of nuclear weapons."

Monday, December 15, 2025

Kinds of Intelligence | LJ Directors’ Summit 2025; Library Journal, December 2, 2025

Lisa Peet, Library Journal; Kinds of Intelligence | LJ Directors’ Summit 2025

"LJ’s 2025 Directors’ Summit looked at artificial—and very real—intelligence from multiple angles

If there was any doubt about what issues are on the minds of today’s library leaders, Library Journal’s 2025 Directors’ Summit, held October 16 and 17 at Denver Public Library (DPL), had some ready answers: AI and people.

Nick Tanzi hit both notes handily in his keynote, “Getting Your Public Library AI-Ready.” Tanzi, assistant director of South Huntington Public Library (SHPL), NY, and technology consultant at The-Digital-Librarian.com (and a 2025 LJ Mover & Shaker), began with a reminder of other at-the-time “disruptive” technologies, starting with a 1994 clip of Today Show anchors first encountering “@” and “.com.”

During most of this digital change, he noted, libraries had the technologies before many patrons and could lead the way. Now everyone has access to some form of AI, but it’s poorly understood. And access without understanding is a staff problem as well as a patron problem.

So, what does it mean for a library to be AI-ready? Start with policy and training, said Tanzi, and then translate that to public services, rather than the other way around. Library policies need to be AI-proofed, beginning by looking at what’s already in place and where it might be stressed by AI: policies governing collection development, reconsideration of materials, tool use, access control, the library’s editorial process, and confidential data. Staff are already using some form of AI at work—do they have organizational guidance?

Tanzi advised fostering AI literacy across the library. At SHPL, he formed an AI user group; it has no prerequisite for participation and staff are paid for their time. Members explore new tools, discuss best practices, complete “homework,” and share feedback, which also allows Tanzi to stress-test policies. It’s not a replacement for formal training, but helps him discover which tools work best in various departments and speeds up learning.

We need to demystify AI tools for staff and patrons, Tanzi noted, and teach ethics around them. Your ultimate goal is to create informed citizens; libraries can build community around AI education, partnering with the local school district, colleges, and government."

Chasing the Mirage of “Ethical” AI; The MIT Press Reader, December 2025

 De Kai, The MIT Press Reader; Chasing the Mirage of “Ethical” AI

"Artificial intelligence poses many threats to the world, but the most critical existential danger lies in the convergence of two AI-powered phenomena: hyperpolarization accompanied by hyperweaponization. Alarmingly, AI is accelerating hyperpolarization while simultaneously enabling hyperweaponization by democratizing weapons of mass destruction (WMDs).

For the first time in human history, lethal drones can be constructed with over-the-counter parts. This means anyone can make killer squadrons of AI-based weapons that fit in the palm of a hand. Worse yet, the AI in computational biology has made genetically engineered bioweapons a living room technology.

How do we handle such a polarized era when anyone, in their antagonism or despair, can run down to the homebuilder’s store and buy all they need to assemble a remote-operated or fully autonomous WMD?

It’s not the AI overlords destroying humanity that we need to worry about so much as a hyperpolarized, hyperweaponized humanity destroying humanity.

To survive this latest evolutionary challenge, we must address the problem of nurturing our artificial influencers. Nurturing them to be ethical and responsible enough not to be mindlessly driving societal polarization straight into Armageddon. Nurturing them so they can nurture us.

But is it possible to ensure such ethical AIs? How can we accomplish this?"

Friday, December 12, 2025

The Disney-OpenAI Deal Redefines the AI Copyright War; Wired, December 11, 2025

BRIAN BARRETT, Wired; The Disney-OpenAI Deal Redefines the AI Copyright War

"“I think that AI companies and copyright holders are beginning to understand and become reconciled to the fact that neither side is going to score an absolute victory,” says Matthew Sag, a professor of law and artificial intelligence at Emory University. While many of these cases are still working their way through the courts, so far it seems like model inputs—the training data that these models learn from—are covered by fair use. But this deal is about outputs—what the model returns based on your prompt—where IP owners like Disney have a much stronger case.

Coming to an output agreement resolves a host of messy, potentially unsolvable issues. Even if a company tells an AI model not to produce, say, Elsa at a Wendy’s drive-through, the model might know enough about Elsa to do so anyway—or a user might be able to prompt their way into making Elsa without asking for the character by name. It’s a tension that legal scholars call the “Snoopy problem,” but in this case you might as well call it the Disney problem.

“Faced with this increasingly clear reality, it makes sense for consumer-facing AI companies and entertainment giants like Disney to think about licensing arrangements,” says Sag."

Disney's deal with OpenAI is about controlling the future of copyright; engadget, December 11, 2025

 Igor Bonifacic, engadget; Disney's deal with OpenAI is about controlling the future of copyright

"The agreement brings together two parties with very different public stances on copyright. Before OpenAI released Sora, the company reportedly notified studios and talent agencies they would need to opt out of having their work appear in the new app. The company later backtracked on this stance. Before that, OpenAI admitted, in a regulatory filing, it would be "impossible to train today's leading AI models without using copyrighted materials."

By contrast, Disney takes copyright law very seriously. In fact, you could argue no other company has done more to shape US copyright law than Disney. For example, there's the Sonny Bono Copyright Term Extension Act, which is more derisively known as the Mickey Mouse Protection Act. The law effectively froze the advancement of the public domain in the United States, with Disney being the greatest beneficiary. It was only last year that the company's copyright for Steamboat Willie expired, 95 years after Walt Disney first created the iconic cartoon."

Thursday, December 11, 2025

Has Cambridge-based AI music upstart Suno 'gone legit'?; WBUR, December 11, 2025

WBUR; Has Cambridge-based AI music upstart Suno 'gone legit'?

"The Cambridge-based AI music company Suno, which has been besieged by lawsuits from record labels, is now teaming up with behemoth label Warner Music. Under a new partnership, Warner will license music in its catalogue for use by Suno's AI.

Copyright law experts Peter Karol and Bhamati Viswanathan join WBUR's Morning Edition to discuss what the deal between Suno and Warner Music means for the future of intellectual property."

Disney Agrees to Bring Its Characters to OpenAI’s Sora Videos; The New York Times, December 11, 2025

The New York Times; Disney Agrees to Bring Its Characters to OpenAI’s Sora Videos

"In a watershed moment for Hollywood and generative artificial intelligence, Disney on Thursday announced an agreement to bring its characters to Sora, OpenAI’s short-form video platform. Videos made with Sora will be available to stream on Disney+ as part of the three-year deal...

“The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling,” Robert A. Iger, the chief executive of Disney, said in a statement.

Disney is the first major Hollywood company to cross this particular Rubicon."

‘Ruined my Christmas spirit’: McDonald’s removes AI-generated ad after backlash; Agence France-Presse via The Guardian, December 10, 2025

Agence France-Presse via The Guardian; ‘Ruined my Christmas spirit’: McDonald’s removes AI-generated ad after backlash

"Melanie Bridge, the chief executive of the Sweetshop Films, the company which made the ad, defended its use of AI in a post on LinkedIn.

“It’s never about replacing craft, it’s about expanding the toolbox. The vision, the taste, the leadership … that will always be human,” she said.

“And here’s the part people don’t see: the hours that went into this job far exceeded a traditional shoot. Ten people, five weeks, full-time.”

But that too sparked online debate.

Emlyn Davies, from the independent production company Bomper Studio, replied to the LinkedIn post: “What about the humans who would have been in it, the actors, the choir?

“Ten people on a project like this is a tiny amount compared to shooting it traditionally live action.”

Coca-Cola recently released its own AI-generated holiday ad, despite receiving backlash when it did the same last year.

The company’s new offering avoids close-ups of humans and mostly features AI-generated images of cute animals in a wintry setting."

Friday, December 5, 2025

The New York Times is suing Perplexity for copyright infringement; TechCrunch, December 5, 2025

Rebecca Bellan, TechCrunch; The New York Times is suing Perplexity for copyright infringement

"The New York Times filed suit Friday against AI search startup Perplexity for copyright infringement, its second lawsuit against an AI company. The Times joins several media outlets suing Perplexity, including the Chicago Tribune, which also filed suit this week."

Thursday, December 4, 2025

OpenAI loses fight to keep ChatGPT logs secret in copyright case; Reuters, December 3, 2025

Reuters; OpenAI loses fight to keep ChatGPT logs secret in copyright case

"OpenAI must produce millions of anonymized chat logs from ChatGPT users in its high-stakes copyright dispute with the New York Times and other news outlets, a federal judge in Manhattan ruled.

U.S. Magistrate Judge Ona Wang in a decision made public on Wednesday said that the 20 million logs were relevant to the outlets' claims and that handing them over would not risk violating users' privacy."

Lawsuit or License?; Columbia Journalism Review, December 4, 2025

Columbia Journalism Review; Lawsuit or License?

"Today, the Tow Center for Digital Journalism is releasing a tracker that monitors developments between news publishers and AI companies—including lawsuits, deals, and grants—based on publicly available information."

Wednesday, December 3, 2025

‘The biggest decision yet’; The Guardian, December 2, 2025

The Guardian; ‘The biggest decision yet’

"Humanity will have to decide by 2030 whether to take the “ultimate risk” of letting artificial intelligence systems train themselves to become more powerful, one of the world’s leading AI scientists has said.

Jared Kaplan, the chief scientist and co-owner of the $180bn (£135bn) US startup Anthropic, said a choice was looming about how much autonomy the systems should be given to evolve.

The move could trigger a beneficial “intelligence explosion” – or be the moment humans end up losing control...

He is not alone at Anthropic in voicing concerns. One of his co-founders, Jack Clark, said in October he was both an optimist and “deeply afraid” about the trajectory of AI, which he called “a real and mysterious creature, not a simple and predictable machine”.

Kaplan said he was very optimistic about the alignment of AI systems with the interests of humanity up to the level of human intelligence, but was concerned about the consequences if and when they exceed that threshold."

Tuesday, December 2, 2025

College Students Flock to a New Major: A.I.; The New York Times, December 1, 2025

The New York Times; College Students Flock to a New Major: A.I.

"Artificial intelligence is the hot new college major...

Now interest in understanding, using and learning how to build A.I. technologies is soaring, and schools are racing to meet rising student and industry demand.

Over the last two years, dozens of U.S. universities and colleges have announced new A.I. departments, majors, minors, courses, interdisciplinary concentrations and other programs.

In 2022, for instance, the Massachusetts Institute of Technology created a major called “A.I. and decision-making.” Students in the program learn to develop A.I. systems and study how technologies like robots interact with humans and the environment. This year, nearly 330 students are enrolled in the program — making A.I. the second-largest major at M.I.T. after computer science.

“Students who prefer to work with data to address problems find themselves more drawn to an A.I. major,” said Asu Ozdaglar, the deputy dean of academics at the M.I.T. Schwarzman College of Computing. Students interested in applying A.I. in fields like biology and health care are also flocking to the new major, she added."

Monday, December 1, 2025

'Technology isn't neutral': Calgary bishop raises ethical questions around AI; Calgary Herald, November 26, 2025

Devon Dekuyper, Calgary Herald; 'Technology isn't neutral': Calgary bishop raises ethical questions around AI

"'We, as human beings, use technology, and we also have to be able to understand it, but also to apply it such that it does not impact negatively the human person, their flourishing (or) society,' said Bishop McGrattan"

Sunday, November 30, 2025

More than half of new articles on the internet are being written by AI – is human writing headed for extinction?; The Conversation, November 24, 2025

Lecturer in Digital and Data Studies, Binghamton University, State University of New York, The Conversation; More than half of new articles on the internet are being written by AI – is human writing headed for extinction?

"The line between human and machine authorship is blurring, particularly as it’s become increasingly difficult to tell whether something was written by a person or AI.

Now, in what may seem like a tipping point, the digital marketing firm Graphite recently published a study showing that more than 50% of articles on the web are being generated by artificial intelligence.

As a scholar who explores how AI is built, how people are using it in their everyday lives, and how it’s affecting culture, I’ve thought a lot about what this technology can do and where it falls short. 

If you’re more likely to read something written by AI than by a human on the internet, is it only a matter of time before human writing becomes obsolete? Or is this simply another technological development that humans will adapt to?...

If you set aside the more apocalyptic scenarios and assume that AI will continue to advance – perhaps at a slower pace than in the recent past – it’s quite possible that thoughtful, original, human-generated writing will become even more valuable.

Put another way: The work of writers, journalists and intellectuals will not become superfluous simply because much of the web is no longer written by humans."