Showing posts with label AI. Show all posts

Monday, October 27, 2025

AI can help authors beat writer’s block, says Bloomsbury chief; The Guardian, October 27, 2025

The Guardian; AI can help authors beat writer’s block, says Bloomsbury chief


[Kip Currier: These are interesting and unexpected comments by Nigel Newton, Bloomsbury Publishing's founder and CEO.

Bloomsbury is the publisher of my forthcoming book Ethics, Information, and Technology. In the interest of transparency, I'll note that I researched and wrote the book the "oldfangled way" and didn't use AI for any aspect of it, including brainstorming. Last year, during a check-in meeting with my editor and a conversation about the book's AI chapter, I happened to learn that Bloomsbury had a policy that authors not use AI tools.

So it's noteworthy to see this publisher's shift on authors' use of AI tools.]


[Excerpt]

"Authors will come to rely on artificial intelligence to help them beat writer’s block, the boss of the book publisher Bloomsbury has said.

Nigel Newton, the founder and chief executive of the publisher behind the Harry Potter series, said the technology could support almost all creative arts, although it would not fully replace prominent writers.

“I think AI will probably help creativity, because it will enable the 8 billion people on the planet to get started on some creative area where they might have hesitated to take the first step,” he told the PA news agency...

Last week the publisher, which is headquartered in London and employs about 1,000 people, experienced a share rise of as much as 10% in a single day after it reported a 20% jump in revenue in its academic and professional division in the first half of its financial year, largely thanks to an AI licensing agreement.

However, revenues in its consumer division fell by about 20%, largely due to the absence of a new title from Maas."

Reddit sues AI company Perplexity and others for ‘industrial-scale’ scraping of user comments; AP, October 22, 2025

 MATT O’BRIEN, AP; Reddit sues AI company Perplexity and others for ‘industrial-scale’ scraping of user comments

"Social media platform Reddit sued the artificial intelligence company Perplexity AI and three other entities on Wednesday, alleging their involvement in an “industrial-scale, unlawful” economy to “scrape” the comments of millions of Reddit users for commercial gain.

Reddit’s lawsuit in a New York federal court takes aim at San Francisco-based Perplexity, maker of an AI chatbot and “answer engine” that competes with Google, ChatGPT and others in online search. 

Also named in the lawsuit are Lithuanian data-scraping company Oxylabs UAB, a web domain called AWMProxy that Reddit describes as a “former Russian botnet,” and Texas-based startup SerpApi, which lists Perplexity as a customer on its website.

It’s the second such lawsuit from Reddit since it sued another major AI company, Anthropic, in June.

But the lawsuit filed Wednesday is different in the way that it confronts not just an AI company but the lesser-known services the AI industry relies on to acquire online writings needed to train AI chatbots."

Sunday, October 26, 2025

‘I’m suddenly so angry!’ My strange, unnerving week with an AI ‘friend’; The Guardian, October 22, 2025

The Guardian; ‘I’m suddenly so angry!’ My strange, unnerving week with an AI ‘friend’

"Do people really want an AI friend? Despite all the articles about individuals falling in love with chatbots, research shows most people are wary of AI companionship. A recent Ipsos poll found 59% of Britons disagreed “that AI is a viable substitute for human interactions”. And in the US, a 2025 Pew survey found that 50% of adults think AI will worsen people’s ability to form meaningful relationships.

I wanted to see for myself what it would be like to have a tiny robot accompanying me all day, so I ordered a Friend ($129) and wore it for a week."

Smart Beds Helped Them Sleep on a Cloud. Then the Cloud Crashed.; The New York Times, October 24, 2025

The New York Times; Smart Beds Helped Them Sleep on a Cloud. Then the Cloud Crashed.


[Kip Currier: Another interesting example -- and probably a surprising one for those of us, myself included, who don't have "smart beds" -- of the ways that smart devices and the Internet of Things (IoT) can affect us. In this instance, people's sleep!

The paperback version of my book, Ethics, Information, and Technology, will be available via Amazon on November 13, 2025 (link here too) and has a significant section on the ethical issues raised by IoT and smart devices.]


[Excerpt]

"Some users of the smart-bed system Eight Sleep, who sleep atop a snug, temperature-regulating mattress cover in search of “zero-gravity rest,” were rousted from their slumber earlier this week for a surprising reason.

Eight Sleep’s collections of smart products, which the company calls “Pods,” and include those “intelligent” mattress covers, were affected by an outage involving the cloud-storage provider Amazon Web Services, which sent large sectors of the internet into disarray on Monday.

The outage, which lasted more than two hours, took down websites for banks, gaming sites and entertainment services, as well as the messaging service WhatsApp. But it also affected people trying to get some shut-eye.

(First, to answer a question readers might have: Yes, there are smart mattress covers, just as there are smart watches, smart door locks and smart refrigerators.)"

Saturday, October 25, 2025

A.I. ‘made-to-order truth’ is not truth, LDS apostle tells faith leaders in Vatican City; The Salt Lake Tribune, October 23, 2025

Dylan Eubank, The Salt Lake Tribune; A.I. ‘made-to-order truth’ is not truth, LDS apostle tells faith leaders in Vatican City

"Speaking this week at a faith and technology summit in Vatican City, apostle Gerrit W. Gong of The Church of Jesus Christ of Latter-day Saints pleaded with religious leaders around the globe to champion ethical and moral use of artificial intelligence.

At the gathering, Gong doubled down on his counsel for careful use of A.I., especially in regards to misinformation and religious inaccuracies...

The Utah-based faith’s first and only Asian American apostle also announced that an A.I. team has begun prototyping and testing various faith and ethics evaluations. This team is collaborating with “socially responsible” focused A.I. companies, along with religious institutions and universities such as Baylor, Brigham Young, Notre Dame and Yeshiva universities...

“Portraying faith traditions accurately and respectfully is not an imposition of religion on A.I. Rather, it is a public necessity,” Gong said. “It is especially needed as increasing numbers of individuals ask A.I. about faith and belief, and as A.I. becomes a primary source of information about faith traditions.”"

Tuesday, October 21, 2025

Gambling. Investing. Gaming. There’s No Difference Anymore.; The New York Times, October 20, 2025

Jonathan D. Cohen, The New York Times; Gambling. Investing. Gaming. There’s No Difference Anymore.


[Kip Currier: It's good to see online gambling issues getting more attention, as in this 10/20/25 New York Times Op-Ed. One of the piece's writers is Jonathan D. Cohen, author of the 2025 book Losing Big: America’s Reckless Bet on Sports Gambling.

I spoke on these issues in my talk -- AI Gambling Thirst Traps and God: Christian Imperatives, Church Roles, and Ethical Responsibilities -- at the September 2-5, 2025 Faithful Futures: Guiding AI with Wisdom and Witness conference in Minneapolis. A publication based on the talk is forthcoming.]


[Excerpt]

"If it feels as if gambling is everywhere, that’s because it is. But today’s gamblers aren’t just retirees at poker tables. They’re young men on smartphones. And thanks to a series of quasi-legal innovations by the online wagering industry, Americans can now bet on virtually anything from their investment accounts. 

In recent years, this industry has been gamifying the investing experience; on brightly colored smartphone apps, risking your money is as easy and attractive as playing Candy Crush. On the app of the investment brokerage Robinhood, users can now buy stocks on one tab, “bet” on Oscars outcomes on another, and trade crypto on a third.

Given a recent explosion in unsafe gambling and growing evidence of severe financial harm, one might ask whether the government should be permitting 18-year-olds to effectively bet on the Dallas Cowboys with the same accounts they can use to invest in Coca-Cola. Under President Trump, who has a son serving as an adviser to two entrants in the sports prediction marketplace, the answer appears to be a firm yes."

Monday, October 20, 2025

‘Every kind of creative discipline is in danger’: Lincoln Lawyer author on the dangers of AI; The Guardian, October 20, 2025

The Guardian; ‘Every kind of creative discipline is in danger’: Lincoln Lawyer author on the dangers of AI

"The writer has his own battles with AI. He is part of a collective of authors, including Jonathan Franzen, Jodi Picoult and John Grisham, suing OpenAI for copyright infringement...

Connelly has pledged $1m (£746,000) to combat the wave of book bans sweeping through his home state of Florida. He said he felt moved to do something after he learned that Harper Lee’s To Kill A Mockingbird, which had been influential to him, was temporarily removed from classrooms in Palm Beach County.

“I had to read that book to be what I am today. I would have never written a Lincoln Lawyer without it,” he said. He was also struck when Stephen Chbosky’s coming of age novel The Perks of Being a Wallflower, “which meant a lot to my daughter”, received a ban.

He and his wife, Linda McCaleb, help fund PEN America’s Miami office countering book bans. “It’s run by a lawyer who then tries to step in, usually by filing injunctions against school boards,” he said. “I don’t believe anyone has any right to tell some other kid they can’t read something, to usurp another parent’s oversight of their children.”"

Monday, October 13, 2025

US Supreme Court asked to hear dispute over copyrights for AI creations; Reuters, October 10, 2025

Reuters; US Supreme Court asked to hear dispute over copyrights for AI creations

 "A computer scientist on Friday asked the U.S. Supreme Court to reconsider a ruling that a work of art generated by artificial intelligence cannot be copyrighted under U.S. law.

Stephen Thaler told the justices that the U.S. Copyright Office's decision denying copyright protection for the art made by his AI system "created a chilling effect on anyone else considering using AI creatively" and "defies the constitutional goals from which Congress was empowered to create copyright.""

Sunday, October 12, 2025

Tilly Norwood & AI Confusion Will Shape Looming Guild Negotiations, Copyright Experts Agree; Deadline, October 12, 2025

Dade Hayes, Deadline; Tilly Norwood & AI Confusion Will Shape Looming Guild Negotiations, Copyright Experts Agree

"Handel and Mishawn Nolan, managing partner of intellectual property law firm Nolan Heimann, shared their perspectives during a panel Friday afternoon at Infinity Festival in Los Angeles.

Digital scanning of human actors, for the purposes of using their likenesses in film and TV projects is another tricky area for the unions given how untested the legal questions are, the attorneys agreed.

“I actually have a client right now” whose body is being scanned, Nolan said. “What I received [from the company] was just a sort of standard certificate of engagement. It was all rights, just like you would normally use. And I said, ‘Well, what are you gonna do with the data? What is the scope of the use?’”

Because of the intense pressure on productions to move quickly, Nolan said, “everyone would like to just turn around [a talent agreement] tomorrow.” But the complexities of copyright issues raised by AI, which is evolving at a breakneck clip, require a lot more thought, she argued. “The way that we’ve always done business can’t be done in the future. It can’t be done instantaneously,” she continued. “You have to take a moment and think about, what are you doing? What are you capturing? What are you going to use it for? How are you going to use it? How long are you going to have access to it? And what happens in the long term? Who holds onto it? Is it safe? Is it gonna be destroyed?”"

Friday, October 10, 2025

Here's who owns what when it comes to AI, creativity and intellectual property; World Economic Forum, October 10, 2025

Seemantani Sharma, Co-Founder and Intellectual Property & Innovation Expert, Mabill Technologies, World Economic Forum; Here's who owns what when it comes to AI, creativity and intellectual property

"Rethinking ownership

The intersection of AI, consciousness and intellectual property requires us to rethink how ownership should evolve. Keeping intellectual property strictly human-centred safeguards accountability, moral agency and the recognition of human creativity. At the same time, acknowledging AI’s expanding role in production may call for new approaches in law. These could take the form of shared ownership models, new categories of liability or entirely new rights frameworks.


For now, the legal balance remains with humans. As long as AI lacks consciousness, it cannot be considered a rights-holder under existing intellectual property theories. Nonetheless, as machine intelligence advances, society faces a pivotal choice. Do we reinforce a human-centred system to protect dignity and creativity or do we adapt the law to reflect emerging realities of collaboration between humans and machines?


This is more than a legal debate. It is a test of how much we value human creativity in an age of intelligent machines. The decisions we take today will shape the future of intellectual property and the meaning of authorship, innovation and human identity itself."

Monday, October 6, 2025

As sports betting explodes, experts push for a public health approach to addiction; NPR, September 30, 2025

NPR; As sports betting explodes, experts push for a public health approach to addiction

"RICHARD BLUMENTHAL: The sophistication and complexity of betting has become staggering.

BROWN: That's U.S. Democratic Senator Richard Blumenthal of Connecticut. He's co-sponsor of the SAFE Bet Act, which would impose federal standards on sports gambling, like no advertising during live sports and no tempting bonus bet promotions.

BLUMENTHAL: States are unable to protect their consumers from excessive and abusive offers and sometimes misleading pitches. They simply don't have the resources or the jurisdiction.

BROWN: The gambling industry is lobbying against the bill. Joe Maloney is with the American Gaming Association. He says federal rules would be a slap in the face to state regulators.

JOE MALONEY: You have the potential to just dramatically, one, usurp the state's authority and then, two, freeze the industry in place.

BROWN: He says the industry acknowledges that gambling is addictive for some people, which is why it developed a model called Responsible Gaming. That includes messages warning people to stop playing when it's no longer fun, and reminding them the odds are very low.

MALONEY: And there's very direct messages, such as, you will lose money here."

Sunday, October 5, 2025

America goes gambling; Quartz, October 5, 2025

Jackie Snow, Quartz; America goes gambling


[Kip Currier: This Quartz article, America Goes Gambling, is a timely look at a significant AI-driven development: the massive growth in online gambling, sports betting, and gambling addiction since the U.S. Supreme Court struck down the federal ban on these activities (outside of Nevada and tribal casinos) in its 2018 Murphy v. NCAA decision.

I spoke on the issue of AI-enhanced online gambling and sports betting at the September 2025 Faithful Futures: Guiding AI with Wisdom and Witness conference in Minneapolis and am currently finishing a chapter for publication on this emerging topic.]


[Excerpt]

"On any given Sunday this football season, Americans are placing millions in legal sports bets, a level of widespread wagering that would have been almost impossible a decade ago when only Nevada offered legal sportsbooks.

Today's football slate represents the peak of a sports betting boom that has fundamentally altered how Americans watch games. Sunday's action is part of an industry that's grown from $4.9 billion in total annual wagers in 2017 to almost $150 billion in 2024. But beneath the Sunday spectacle lies a growing concern about addiction specialists reporting record demand for gambling help as the line between sports entertainment and financial risk becomes increasingly blurred.

The transformation has been swift and dramatic. When the Supreme Court struck down the federal sports betting ban in Murphy v. NCAA in 2018, legal sports betting was confined to Nevada and tribal casinos. Today, legal sports betting operates in 39 states and Washington, D.C., with more statehouses considering laws that would greenlight it."

Saturday, October 4, 2025

I’m a Screenwriter. Is It All Right if I Use A.I.?; The Ethicist, The New York Times, October 4, 2025

The Ethicist, The New York Times; I’m a Screenwriter. Is It All Right if I Use A.I.?

"I write for television, both series and movies. Much of my work is historical or fact-based, and I have found that researching with ChatGPT makes Googling feel like driving to the library, combing the card catalog, ordering books and waiting weeks for them to arrive. This new tool has been a game changer. Then I began feeding ChatGPT my scripts and asking for feedback. The notes on consistency, clarity and narrative build were extremely helpful. Recently I went one step further: I asked it to write a couple of scenes. In seconds, they appeared — quick paced, emotional, funny, driven by a propulsive heartbeat, with dialogue that sounded like real people talking. With a few tweaks, I could drop them straight into a screenplay. So what ethical line would I be crossing? Would it be plagiarism? Theft? Misrepresentation? I wonder what you think. — Name Withheld"

How to live a good life in difficult times: Yuval Noah Harari, Rory Stewart and Maria Ressa in conversation; The Guardian, October 4, 2025

Interview, The Guardian; How to live a good life in difficult times: Yuval Noah Harari, Rory Stewart and Maria Ressa in conversation


[Kip Currier: One of the most insightful, nuanced, and enlightening pieces among the thousands I've read this year. I've long followed and admired the work and wisdom of Maria Ressa and Yuval Noah Harari but wasn't familiar with UK academic and politician Rory Stewart, who makes interesting contributions to this joint interview. Individually and collectively, they identify in clear-eyed fashion what's going on in the world today, what the stakes are, and what each of us can do to try to make some kind of positive difference.

I've shared this article with others in my network and encourage you to do the same, so that these beneficial, thought-provoking perspectives reach as many people as possible.]


[Excerpt]

"What happens when an internationally bestselling historian, a Nobel peace prize-winning journalist and a former politician get together to discuss the state of the world, and where we’re heading? Yuval Noah Harari is an Israeli medieval and military historian best known for his panoramic surveys of human history, including Sapiens, Homo Deus and, most recently, Nexus: A Brief History of Information Networks from the Stone Age to AI. Maria Ressa, joint winner of the Nobel peace prize, is a Filipino and American journalist who co-founded the news website Rappler. And Rory Stewart is a British academic and former Conservative MP, writer and co-host of The Rest Is Politics podcast. Their conversation ranged over the rise of AI, the crisis in democracy and the prospect of a Trump-Putin wedding, but began by considering a question central to all of their work: how to live a good life in an increasingly fragmented and fragile world?...

YNH I think that more people need to realise that we have to do the hard work ourselves. There is a tendency to assume that we can rely on reality to do the job for us. That if there are people who talk nonsense, who support illogical policies, who ignore the facts, sooner or later, reality will wreak vengeance on them. And this is not the way that history works.

So if you want the truth, and you want reality to win, each of us has to do some of the hard work ourselves: choose one thing and focus on that and hope that other people will also do their share. That way we avoid the extremes of despair."

Sunday, September 28, 2025

Why I gave the world wide web away for free; The Guardian, September 28, 2025

The Guardian; Why I gave the world wide web away for free

"Sharing your information in a smart way can also liberate it. Why is your smartwatch writing your biological data to one silo in one format? Why is your credit card writing your financial data to a second silo in a different format? Why are your YouTube comments, Reddit posts, Facebook updates and tweets all stored in different places? Why is the default expectation that you aren’t supposed to be able to look at any of this stuff? You generate all this data – your actions, your choices, your body, your preferences, your decisions. You should own it. You should be empowered by it.

Somewhere between my original vision for web 1.0 and the rise of social media as part of web 2.0, we took the wrong path. We’re now at a new crossroads, one where we must decide if AI will be used for the betterment or to the detriment of society. How can we learn from the mistakes of the past? First of all, we must ensure policymakers do not end up playing the same decade-long game of catchup they have done over social media. The time to decide the governance model for AI was yesterday, so we must act with urgency.

In 2017, I wrote a thought experiment about an AI that works for you. I called it Charlie. Charlie works for you like your doctor or your lawyer, bound by law, regulation and codes of conduct. Why can’t the same frameworks be adopted for AI? We have learned from social media that power rests with the monopolies who control and harvest personal data. We can’t let the same thing happen with AI.

So how do we move forward? Part of the frustration with democracy in the 21st century is that governments have been too slow to meet the demands of digital citizens. The AI industry landscape is fiercely competitive, and development and governance are dictated by companies. The lesson from social media is that this will not create value for the individual.

I coded the world wide web on a single computer in a small room. But that small room didn’t belong to me, it was at Cern. Cern was created in the aftermath of the second world war by the UN and European governments who identified a historic, scientific turning point that required international collaboration. It is hard to imagine a big tech company agreeing to share the world wide web for no commercial reward like Cern allowed me to. That’s why we need a Cern-like not-for-profit body driving forward international AI research.

I gave the world wide web away for free because I thought that it would only work if it worked for everyone. Today, I believe that to be truer than ever. Regulation and global governance are technically feasible, but reliant on political willpower. If we are able to muster it, we have the chance to restore the web as a tool for collaboration, creativity and compassion across cultural borders. We can re-empower individuals, and take the web back. It’s not too late."

Wednesday, September 24, 2025

Copyright and AI: Controlling Rights and Managing Risks; Morgan Lewis, September 23, 2025

JOSHUA M. DALTON, Partner, Boston; COLLEEN GANIN, Partner, New York; MICHAEL R. PFEUFFER, Senior Attorney, Pittsburgh, Morgan Lewis; Copyright and AI: Controlling Rights and Managing Risks

"The law on copyright and AI is still developing, with courts and policymakers testing the limits of authorship, infringement, and fair use. Companies should expect continued uncertainty and rapid change in this space."

AI Influencers: Libraries Guiding AI Use; Library Journal, September 16, 2025

Matt Enis, Library Journal; AI Influencers: Libraries Guiding AI Use

"In addition to the field’s collective power, libraries can have a great deal of influence locally, says R. David Lankes, the Virginia and Charles Bowden Professor of Librarianship at the University of Texas at Austin and cohost of LJ’s Libraries Lead podcast.

“Right now, the place where librarians and libraries could have the most impact isn’t on trying to change OpenAI or Microsoft or Google; it’s really in looking at implementation policy,” Lankes says. For example, “on the public library side, many cities and states are adopting AI policies now, as we speak,” Lankes says. “Where I am in Austin, the city has more or less said, ‘go forth and use AI,’ and that has turned into a mandate for all of the city offices, which in this case includes the Austin Public Library” (APL). 

Rather than responding to that mandate by simply deciding how the library would use AI internally, APL created a professional development program to bring its librarians up to speed with the technology so that they can offer other city offices help with ways to use it, and advice on how to use it ethically and appropriately, Lankes explains.

“Cities and counties are wrestling with AI, and this is an absolutely perfect time for libraries to be part of that conversation,” Lankes says."

AI as Intellectual Property: A Strategic Framework for the Legal Profession; JD Supra, September 18, 2025

Co-authors: James E. Malackowski and Eric T. Carnick, JD Supra; AI as Intellectual Property: A Strategic Framework for the Legal Profession

"The artificial intelligence revolution presents the legal profession with its most significant practice development opportunity since the emergence of the internet. AI spending across hardware, software, and services reached $279.22 billion in 2024 and is projected to grow at a compound annual growth rate of 35.9% through 2030, reaching $1.8 trillion.[i] AI is rapidly enabling unprecedented efficiencies, insights, and capabilities in industry. The innovations underlying these benefits are often the result of protectable intellectual property (IP) assets. The ability to raise capital and achieve higher valuations can often be traced back to such IP. According to data from Carta, startups categorized as AI companies raised approximately one-third of total venture funding in 2024. Looking only at late-stage funding (Series E+), almost half (48%) of total capital raised went to AI companies.[ii]Organizations that implement strategic AI IP management can realize significant financial benefits.

At the same time, AI-driven enhancements have introduced profound industry risks, e.g., disruption of traditional business models; job displacement and labor market reductions; ethical and responsible AI concerns; security, regulatory, and compliance challenges; and potentially, in more extreme scenarios, broad catastrophic economic consequences. Such risks are exacerbated by the tremendous pace of AI development and adoption, in some cases surpassing societal understanding and regulatory frameworks. According to McKinsey, 78% of respondents say their organizations use AI in at least one business function, up

from 72% in early 2024 and 55% a year earlier.[iii]

This duality—AI as both a catalyst and a disruptor—is now a feature of the modern global economy. There is an urgent need for legal frameworks that can protect AI innovation, facilitate the proper commercial development and deployment of AI-related IP, and navigate the risks and challenges posed by this new technology. Legal professionals who embrace AI as IP™ will benefit from this duality. Early indicators suggest significant advantages for legal practitioners who develop specialized AI as IP expertise, while traditional IP practices may face commoditization pressures."

Monday, September 22, 2025

If Anyone Builds it, Everyone Dies review – how AI could kill us all; The Guardian; September 22, 2025

The Guardian; If Anyone Builds it, Everyone Dies review – how AI could kill us all

"“History,” they write, “is full of … examples of catastrophic risk being minimised and ignored,” from leaded petrol to Chornobyl. But what about predictions of catastrophic risk being proved wrong? History is full of those, too, from Malthus’s population apocalypse to Y2K. Yudkowsky himself once claimed that nanotechnology would destroy humanity “no later than 2010”.

The problem is that you can be overconfident, inconsistent, a serial doom-monger, and still be right. It’s important to be aware of our own motivated reasoning when considering the arguments presented here; we have every incentive to disbelieve them.

And while it’s true that they don’t represent the scientific consensus, this is a rapidly changing, poorly understood field. What constitutes intelligence, what constitutes “super”, whether intelligence alone is enough to ensure world domination – all of this is furiously debated.

At the same time, the consensus that does exist is not particularly reassuring. In a 2024 survey of 2,778 AI researchers, the median probability placed on “extremely bad outcomes, such as human extinction” was 5%. Worryingly, “having thought more (either ‘a lot’ or ‘a great deal’) about the question was associated with a median of 9%, while having thought ‘little’ or ‘very little’ was associated with a median of 5%”.

Yudkowsky has been thinking about the problem for most of his adult life. The fact that his prediction sits north of 99% might reflect a kind of hysterical monomania, or an especially thorough engagement with the problem. Whatever the case, it feels like everyone with an interest in the future has a duty to read what he and Soares have to say."

Friday, September 19, 2025

The 18th-century legal case that changed the face of music copyright law; WIPO Magazine, September 18, 2025

Eyal Brook, Partner, Head of Artificial Intelligence, S. Horowitz & Co., WIPO Magazine; The 18th-century legal case that changed the face of music copyright law

"As we stand at the threshold of the AI revolution in music creation, perhaps the most valuable lesson from this history is not any particular legal doctrine but rather the recognition that our conceptions of musical works and authorship are not fixed but evolving.

Imagine what would have happened had Berne negotiators decided to define the term in 1886. The “musical work” as a legal concept was born from Johann Christian Bach’s determination to assert his creative rights – and it continues to transform with each new technological development and artistic innovation.

The challenge for copyright law in the 21st century is to keep fulfilling copyright’s fundamental purpose: to recognize and reward human creativity in all its forms. This will require not just legal ingenuity but also a willingness to reconsider our most basic assumptions about what music is and how it comes into being.

Bach’s legacy, then, is not just the precedent that he established but the ongoing conversation he initiated – an unfinished symphony of legal thought that continues to evolve with each new technological revolution and artistic movement.

As we face the challenges of AI and whatever technologies may follow, we would do well to remember that the questions we ask today about ownership and creativity echo those first raised in a London courtroom almost 250 years ago by a composer determined to claim what he believed was rightfully his."