Showing posts with label AI. Show all posts

Thursday, February 5, 2026

When AI and IP Collide: What Journalists Need to Know; National Press Foundation (NPF), January 22, 2026

 National Press Foundation (NPF); When AI and IP Collide: What Journalists Need to Know

"With roughly 70 federal lawsuits filed against AI developers, the intersection of technology and intellectual property is now one of the most influential legal beats. Courts are jumping in to define the future of “fair use.” To bridge the gap between complex legal proceedings and the public’s understanding, NPF held a webinar to unpack these intellectual property battles. One thing all of the expert panelists agreed on: most cases are either an issue of input – i.e. what the AI models pull in to train on – or output – what AI generates, as in the case of Disney and other Hollywood studios v. Midjourney.

“The behavior here of AI companies and the assertion of fair use is completely understandable in our market capitalist system – all players want something very simple. They want their inputs for little or nothing and their outputs to be very expensive,” said Loyola Law professor Justin Hughes. “The fair use argument is all about AI companies wanting their inputs to be free, just like ranchers want their grazing land from the federal government to be free or their mining rights to be free.”

AI Copyright Cases Journalists Should Know:
  • Bartz et al. v. Anthropic: Anthropic reached a $1.5 billion settlement in a landmark case for the industry after a class of book authors accused the company of using pirated books to train the Claude AI model. “The mere training itself may be fair use, but the retention of these large copy data sets and their replication or your training from having taken pirated data sets, that’s not fair use,” Hughes explained.
  • The NYT Company v. Microsoft Corporation et al.: This is a massive multi-district litigation in New York in which the NYT is suing OpenAI. The Times has pushed for discovery into over 20 million private ChatGPT logs to prove that the model is being used to get past paywalls.
  • Advance Local Media LLC et al. v. Cohere Inc.: The case against the startup Cohere is particularly vital for newsrooms, as a judge ruled that AI-generated summaries infringe on news organizations’ ability to get traffic on their sites.

“We’ve seen, there’s been a lot of developers who have taken the kind of classic Silicon Valley approach of ask forgiveness rather than permission,” said Terry Hart, general counsel of the Association of American Publishers. “They have gone ahead and trained a lot of models using a lot of copyrighted works without authorization.” Tech companies have trained massive models to ingest the entirety of the internet, including articles, without prior authorization, and Hughes points out that this is a repeated occurrence.
AI companies often keep unauthorized copies of these vast datasets to retrain and tweak their models, leading to multiple steps of reproduction that could violate copyright.

AI and U.S. Innovation
A common defense from Silicon Valley tech companies is that using these vast amounts of data is necessary for U.S. innovation and for keeping the economy competitive. “‘We need to beat China, take our word for it, this is going to be great, and we’re just going to cut out a complete sector of the economy that’s critical to the success of our models,’” Hart said. “In the long run, that’s not good for innovation. It’s not good for the creative sectors and it’s not good for the AI sector.”

Reuters technology reporter Deepa Seetharaman has also heard the China competition argument, among others. “The metaphor that I’ll hear a lot here is, ‘it’s like somebody visiting a library and reading every book, except this is a system that can remember every book and remember all the pieces of every book. And so why are you … harming us for developing something that’s so capable?’” Seetharaman said. Hughes noted that humans are not walking into a library with a miniature high-speed photocopier to memorize every book. Humans don’t memorize with the “faithful” precision of a machine. Hart added that the metaphor breaks down because technology has created a new market space that isn’t comparable to a human reader.

Speakers:
  • Wayne Brough, Resident Senior Fellow, Technology and Innovation Team, R Street
  • Terry Hart, General Counsel, Association of American Publishers
  • Justin Hughes, Honorable William Matthew Byrne Distinguished Professor of Law, Loyola Marymount University
  • Deepa Seetharaman, Tech Correspondent, Reuters
Summary and transcript: https://nationalpress.org/topic/when-... This event is sponsored by The Copyright Alliance and NSG Next Solutions Group. This video was produced within the Evelyn Y. Davis studios. NPF is solely responsible for the content."

‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI; The Guardian, February 5, 2026

Anuj Behal, The Guardian; ‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI


[Kip Currier: The largely unaddressed plight of content moderators became more real for me after reading this haunting 9/9/24 piece in the Washington Post, "I quit my job as a content moderator. I can never go back to who I was before."

As mentioned in the graphic article's byline, content moderator Alberto Cuadra spoke with journalist Beatrix Lockwood. Maya Scarpa's illustrations poignantly give life to Alberto Cuadra's first-hand experiences and ongoing impacts from the content moderation he performed for an unnamed tech company. I talk about Cuadra's experiences and the ethical issues of content moderation, social media, and AI in my Ethics, Information, and Technology book.]


[Excerpt]

"Murmu, 26, is a content moderator for a global technology company, logging on from her village in India’s Jharkhand state. Her job is to classify images, videos and text that have been flagged by automated systems as possible violations of the platform’s rules.

On an average day, she views up to 800 videos and images, making judgments that train algorithms to recognise violence, abuse and harm.

This work sits at the core of machine learning’s recent breakthroughs, which rest on the fact that AI is only as good as the data it is trained on. In India, this labour is increasingly performed by women, who are part of a workforce often described as “ghost workers”.

“The first few months, I couldn’t sleep,” she says. “I would close my eyes and still see the screen loading.” Images followed her into her dreams: of fatal accidents, of losing family members, of sexual violence she could not stop or escape. On those nights, she says, her mother would wake and sit with her...

“In terms of risk,” she says, “content moderation belongs in the category of dangerous work, comparable to any lethal industry.”

Studies indicate content moderation triggers lasting cognitive and emotional strain, often resulting in behavioural changes such as heightened vigilance. Workers report intrusive thoughts, anxiety and sleep disturbances.

A study of content moderators published last December, which included workers in India, identified traumatic stress as the most pronounced psychological risk. The study found that even where workplace interventions and support mechanisms existed, significant levels of secondary trauma persisted."

Tuesday, February 3, 2026

Pay More Attention to A.I.; The New York Times, January 31, 2026

Ross Douthat, The New York Times; Pay More Attention to A.I.

"Unfortunately everyone I talk with offers conflicting reports. There are the people who envision A.I. as a revolutionary technology, but ultimately merely akin to the internet in its effects — the equivalent, let’s say, of someone telling you that the Indies are a collection of interesting islands, like the Canaries or the Azores, just bigger and potentially more profitable.

Then there are the people who talk about A.I. as an epoch-making, Industrial Revolution-level shift — which would be the equivalent of someone in 1500 promising that entire continents waited beyond the initial Caribbean island chain, and that not only fortunes but empires and superpowers would eventually rise and fall based on initial patterns of exploration and settlement and conquest.

And then, finally, there are the people with truly utopian and apocalyptic perspectives — the Singularitarians, the A.I. doomers, the people who expect us to merge with our machines or be destroyed by them. Think of them as the equivalent of Ponce de Leon seeking the Fountain of Youth, envisioning the New World as a territory where history fundamentally ruptures and the merely human age is left behind."

The Copyright Conversation; Library Journal, February 3, 2026

 Hallie Rich, Library Journal; The Copyright Conversation

"Welcome to the Library Journal Roundtable. The theme for today is copyright. The context is libraries. My name is Jim Neal. I’m University Librarian Emeritus at Columbia University in New York and Senior Policy Fellow at the American Library Association. I will serve as the moderator.

Allow me to introduce the members of the panel. Jonathan Band is the counsel to the Library Copyright Alliance. He works with the American Library Association and the Association of Research Libraries. Sara Benson is Associate Professor and Copyright Librarian at the University of Illinois Library. She’s also an affiliate professor at the School of Information of the Siebel Center for Design, the European Union Center and the Center for Global Studies. Rick Anderson is the University Librarian at Brigham Young University. Kyle Courtney is Director of Copyright and Information Policy at Harvard and founder of two library nonprofits, Library Futures and the eBook Study Group.

All of these individuals are copyright and information policy experts with years and years of deep involvement in education and advocacy around the importance of copyright for libraries, the laws and legislation which influence our work in libraries."

‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report; The Guardian, February 3, 2026

The Guardian; ‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report

"The International AI Safety report is an annual survey of technological progress and the risks it is creating across multiple areas, from deepfakes to the jobs market.

Commissioned at the 2023 global AI safety summit, it is chaired by the Canadian computer scientist Yoshua Bengio, who describes the “daunting challenges” posed by rapid developments in the field. The report is also guided by senior advisers, including Nobel laureates Geoffrey Hinton and Daron Acemoglu.

Here are some of the key points from the second annual report, published on Tuesday. It stresses that it is a state-of-play document, rather than a vehicle for making specific policy recommendations to governments. Nonetheless, it is likely to help frame the debate for policymakers, tech executives and NGOs attending the next global AI summit in India this month...

1. The capabilities of AI models are improving...


2. Deepfakes are improving and proliferating...


3. AI companies have introduced biological and chemical risk safeguards...


4. AI companions have grown rapidly in popularity...


5. AI is not yet capable of fully autonomous cyber-attacks...


6. AI systems are getting better at undermining oversight...


7. The jobs impact remains unclear"

Monday, February 2, 2026

Move Fast, but Obey the Rules: China’s Vision for Dominating A.I.; The New York Times, February 2, 2026

Meaghan Tobin, The New York Times; Move Fast, but Obey the Rules: China’s Vision for Dominating A.I.

"Mr. Xi’s remarks highlight a tension shaping China’s tech industry. China’s leadership has decided that A.I. will drive the country’s economic growth in the next decade. At the same time, it cannot allow the new technology to disrupt the stability of Chinese society and the Communist Party’s hold over it.

The result is that the government is pushing Chinese A.I. companies to do two things at once: move fast so China can outpace international rivals and be at the forefront of the technological shift, while complying with an increasingly complex set of rules."

Where Is A.I. Taking Us? Eight Leading Thinkers Share Their Visions.; The New York Times, February 2, 2026

The New York Times; Where Is A.I. Taking Us? Eight Leading Thinkers Share Their Visions.

"People have been working on artificial intelligence for decades. But five years ago, few were predicting that A.I. would break through as the most important technology story of the 2020s — and quite possibly the century. Large language models have turned A.I. into a household topic, but all areas of A.I. have taken great leaps forward.

Now, we are inundated with chatter about how much A.I. will transform our lives and our world. Already, companies are trying to find ways to offload tasks and even entire jobs to A.I. More people are turning to A.I. for social interaction and mental health support. Educators are scrambling to manage students’ increased reliance on these tools. And in the near future A.I. may lead to breakthroughs in drug discovery and energy; it could allow more people to create art and cultural works — or turn these industries into slop factories.

As society wrestles with whether A.I. will lead us into a better future or a catastrophic one, Times Opinion turned to eight experts for their predictions on where A.I. may go in the next five years. Listening to them may help us bring out the best and mitigate the worst of this new technology."

Saturday, January 31, 2026

Copyright and creativity in Episode 2 of the EUIPO Podcast; European Union Intellectual Property Office (EUIPO), January 28, 2026

European Union Intellectual Property Office (EUIPO); Copyright and creativity in Episode 2 of the EUIPO Podcast

"Copyright and creativity in Episode 2 of the EUIPO Podcast

The European Union Intellectual Property Office (EUIPO) has released the second episode of its podcast series ‘Creative Sparks: From inspiration to innovation’, focusing on copyright and the launch of the EUIPO Copyright Knowledge Centre.

Titled “The idea makers: Europe’s new home for copyright”, the episode looks at how copyright supports creativity across Europe. From music, film and publishing to design, digital content and emerging technologies such as generative artificial intelligence.

It brings together institutional and creator perspectives through two guests: Véronique Delforge, copyright legal expert at the EUIPO, and Nathalie Boyer, actress, voice-over artist, Board member of ADAMI and President of the ADAMI Foundation for the Citizen Artist. They discuss creative innovation, why copyright remains essential in a rapidly evolving creative landscape and how creators can better understand and exercise their rights.

The conversation highlights the growing complexity of copyright in a digital and cross-border environment, the specific challenges faced by performers and cultural organisations, and the need for clarity, transparency and trusted information. Particular attention is given to the impact of streaming platforms and generative AI on creative works, authorship and remuneration.

The episode also introduces the EUIPO Copyright Knowledge Centre, launched to bring together reliable information, research, tools and resources in one place.

Making IP closer

The podcast is part of the EUIPO’s determination to make intellectual property more accessible to all and engaging for Europeans, businesses and creators.

The EUIPO will issue monthly episodes and explore topics related to creativity and intellectual property as a tool to foster innovation and enhance competitiveness in the EU in the digital era, among many others."

Wednesday, January 28, 2026

Copyrighted art, mobile phones, Greenland: welcome to our age of shameless theft; The Guardian, January 28, 2026

The Guardian; Copyrighted art, mobile phones, Greenland: welcome to our age of shameless theft

"Last week I discovered that an article I wrote about the England cricket team has already been copied and repackaged, verbatim and without permission, by an Indian website. What is the appropriate response here? Decry and sue? Shrug and move on? I ponder the question as I stroll through my local supermarket, where the mackerel fillets are wreathed in metal security chains and the dishwasher tabs have to be requested from the storeroom like an illicit little treat.

On the way home, I screenshot and crop a news article and share it to one of my WhatsApp groups. In another group, a family member has posted an AI-generated video (“forwarded many times”) of Donald Trump getting his head shaved by Xi Jinping while Joe Biden laughs in the background. I watch the mindless slop on my phone as I walk along the main road, instinctively gripping my phone a little tighter as I do so.

Increasingly, by small and imperceptible degrees, we seem to live in a world defined by petty theft; petty not in its scale or volume but by its sense of entitlement and impunity. A joke, a phone, an article, the island of Greenland, the entire canon of published literature, a bag of dishwasher tablets: everything, it seems, is fair game. How did we get to this point, and where does it lead us?"

Tuesday, January 27, 2026

High Court Shouldn’t Weigh AI’s Copyright Author Status, US Says; Bloomberg Law, January 26, 2026

 

Bloomberg Law; High Court Shouldn’t Weigh AI’s Copyright Author Status, US Says

"The US Solicitor General advised the US Supreme Court not to take up a computer scientist’s petition to consider whether AI could be an author under copyright law.

A decision foreclosing nonhuman authorship for Stephen Thaler’s Creativity Machine didn’t conflict with any in other circuits or raise complicated questions about protections for artificial intelligence-assisted work by human authors, the Jan. 23 filing said."

A Lecture on Faith, Ethics and Artificial Intelligence; Episcopal News Service, Lecture: Saturday, March 7, 11 AM EST

 Episcopal News Service; A Lecture on Faith, Ethics and Artificial Intelligence

"Join Grace Church Brooklyn Heights as we welcome Dr. Isaac B. Sharp for a lecture on faith, ethics and artificial intelligence addressing the question: What does Christian Ethics have to say about the promises and pitfalls of artificial intelligence, engaging questions of justice, agency and moral responsibility? The lecture will take place on Saturday, March 7th at Grace Church (254 Hicks Street, Brooklyn, NY 11201) at 11am. A light lunch will be provided. Please click here to register. For more information, please email The Rev. Leandra Lisa Lambert at LLambert@gracebrooklyn.org"

Monday, January 26, 2026

Behind the Curtain: Anthropic's warning to the world; Axios, January 26, 2026

 Jim VandeHei, Mike Allen, Axios; Behind the Curtain: Anthropic's warning to the world

"Anthropic CEO Dario Amodei, the architect of the most powerful and popular AI system for global business, is warning of the imminent "real danger" that super-human intelligence will cause civilization-level damage absent smart, speedy intervention.

  • In a 38-page essay, shared with us in advance of Monday's publication, Amodei writes: "I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species."

  • "Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it."

Why it matters: Amodei's company has built among the most advanced LLM systems in the world. 


  • Anthropic's new Claude Opus 4.5 and coding and Cowork tools are the talk of Silicon Valley and America's C-suites. 

  • AI is doing 90% of the computer programming to build Anthropic's products, including its own AI.

Amodei, one of the most vocal moguls about AI risk, worries deeply that government, tech companies and the public are vastly underestimating what could go wrong. His memo — a sequel to his famous 2024 essay, "Machines of Loving Grace: How AI Could Transform the World for the Better" — was written to jar others, provoke a public debate and detail the risks.


  • Amodei insists he's optimistic that humans will navigate this transition — but only if AI leaders and government are candid with people and take the threats more seriously than they do today.

Amodei's concerns flow from his strong belief that within a year or two, we will face the stark reality of what he calls a "country of geniuses in a datacenter.""

Search Engines, AI, And The Long Fight Over Fair Use; Electronic Frontier Foundation (EFF), January 23, 2026

Joe Mullin, Electronic Frontier Foundation (EFF); Search Engines, AI, And The Long Fight Over Fair Use

"We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Long before generative AI, copyright holders warned that new technologies for reading and analyzing information would destroy creativity. Internet search engines, they argued, were infringement machines—tools that copied copyrighted works at scale without permission. As they had with earlier information technologies like the photocopier and the VCR, copyright owners sued.

Courts disagreed. They recognized that copying works in order to understand, index, and locate information is a classic fair use—and a necessary condition for a free and open internet.

Today, the same argument is being recycled against AI. At issue is whether copyright owners should be allowed to control how others analyze, reuse, and build on existing works."

Saturday, January 24, 2026

Copyright Office Doubles Down on AI Authorship Stance in the Midjourney Case; The Fashion Law (TFL), January 23, 2026

 TFL, The Fashion Law (TFL); Copyright Office Doubles Down on AI Authorship Stance in the Midjourney Case

"The U.S. Copyright Office is standing firm in its position that works generated by artificial intelligence (“AI”), even when refined or curated by a human user, do not qualify for copyright protection unless the human author clearly limits their claim to their own original contributions. In a newly filed response and cross-motion for summary judgment, the Office is asking a federal court in Colorado to deny artist Jason Allen’s motion for summary judgment and uphold its refusal to register the work at issue, arguing that the dispute turns on the Copyright Act’s long-established human authorship requirement and not hostility toward AI."

Friday, January 23, 2026

It Makes Sense That People See A.I. as God; The New York Times, January 23, 2026

The New York Times; It Makes Sense That People See A.I. as God

"More and more, when it comes to our relationships with A.I. and the complex algorithms that shape so much of our modern subjectivity, we have slipped into the language and habits of mind we normally reserve for deities. And even people who do not make an explicit connection between A.I. and religion engage a kind of religious mode around the new technology."

Wednesday, January 21, 2026

Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss; The Guardian, January 21, 2026

The Guardian; Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss

"Jamie Dimon, the boss of JP Morgan, has said artificial intelligence “may go too fast for society” and cause “civil unrest” unless governments and business support displaced workers.

While advances in AI will have huge benefits, from increasing productivity to curing diseases, the technology may need to be phased in to “save society”, he said...

Jensen Huang, the chief executive of the semiconductor maker Nvidia, whose chips are used to power many AI systems, argued that labour shortages rather than mass layoffs were the threat.

Playing down fears of AI-driven job losses, Huang told the meeting in Davos that “energy’s creating jobs, the chips industry is creating jobs, the infrastructure layer is creating jobs … jobs, jobs, jobs”...

Huang also argued that AI robotics was a “once-in-a-generation” opportunity for Europe, as the region had an “incredibly strong” industrial manufacturing base."

They’ve outsourced the worst parts of their jobs to tech. How you can do it, too.; The Washington Post, January 20, 2026

The Washington Post; They’ve outsourced the worst parts of their jobs to tech. How you can do it, too.

"Artificial intelligence is supposed to make your work easier. But figuring out how to use it effectively can be a challenge.

Over the past several years, AI models have continued to evolve, with plenty of tools for specific tasks such as note-taking, coding and writing. Many workers spent last year experimenting with AI, applying various tools to see what actually worked. And as employers increasingly emphasize AI in their business, they’re also expecting workers to know how to use it...

The number of people using AI for work is growing, according to a recent poll by Gallup. The percentage of U.S. employees who used AI for their jobs at least a few times a year hit 45 percent in the third quarter of last year, up five percentage points from the previous quarter. The top use cases for AI, according to the poll, were to consolidate information, generate ideas and learn new things.

The Washington Post spoke to workers to learn how they’re getting the best use out of AI. Here are five of their best tips. A caveat: AI may not be suitable for all workers, so be sure to follow your company’s policy."

Tuesday, January 20, 2026

FREE WEBINAR: REGISTER: AI, Intellectual Property and the Emerging Legal Landscape; National Press Foundation, Thursday, January 22, 2026

 National Press Foundation; REGISTER: AI, Intellectual Property and the Emerging Legal Landscape

"Artificial intelligence is colliding with U.S. copyright law in ways that could reshape journalism, publishing, software, and the creative economy.

The intersection of AI and intellectual property has become one of the most consequential legal battles of the digital age, with roughly 70 federal lawsuits filed against AI companies and copyright claims on works ranging from literary and visual work to music and sound recording to computer programs. Billions of dollars are at stake.

Courts are now deciding what constitutes “fair use,” whether and how AI companies may use copyrighted material to build models, what licensing is required, and who bears responsibility when AI outputs resemble protected works. The legal decisions will shape how news, art, and knowledge are produced — and who gets paid for them.

To help journalists better understand and report on the developing legal issues of AI and IP, join the National Press Foundation and a panel of experts for a wide-ranging discussion around the stakes, impact and potential solutions. Experts in technology and innovation as well as law and economics join journalists in this free online briefing 12-1 p.m. ET on Thursday, January 22, 2026."

AI platforms like Grok are an ethical, social and economic nightmare — and we're starting to wake up; Australian Broadcasting Corporation, January 18, 2026

 Alan Kohler, Australian Broadcasting Corporation; AI platforms like Grok are an ethical, social and economic nightmare — and we're starting to wake up

 "As 2025 began, I thought humanity's biggest problem was climate change.

In 2026, AI is more pressing...

Musk's xAI and the other intelligence developers are working as fast as possible towards what they call AGI (artificial general intelligence) or ASI (artificial superintelligence), which is, in effect, AI that makes its own decisions. Given its answer above, an ASI version of Grok might decide not to do non-consensual porn, but others will.

Meanwhile, photographic and video evidence in courts will presumably become useless if they can be easily faked. Many courts are grappling with this already, including the Federal Court of Australia, but it could quickly get out of control.

AI will make politics much more chaotic than it already is, with incredibly effective fake campaigns including damning videos of candidates...

But AI is not like the binary threat of a nuclear holocaust — extinction or not — its impact is incremental and already happening. The Grok body fakes are known about, and the global outrage has apparently led to some controls on it for now, but the impact on jobs and the economy is completely unknown and has barely begun."

Sunday, January 18, 2026

Matthew McConaughey Trademarks ‘Alright, Alright, Alright!’ and Other IP as Legal Protections Against ‘AI Misuse’; Variety, January 14, 2026

Todd Spangler, Variety; Matthew McConaughey Trademarks ‘Alright, Alright, Alright!’ and Other IP as Legal Protections Against ‘AI Misuse’

"Matthew McConaughey’s lawyers want you to know that using AI to replicate the actor’s famous catchphrase is not “alright, alright, alright.”

Attorneys for entertainment law firm Yorn Levine representing McConaughey have secured eight trademarks from the U.S. Patent and Trademark Office over the last several months for their client, which they said is aimed at protecting his voice and likeness from unauthorized AI misuse."