
Friday, October 10, 2025

Here's who owns what when it comes to AI, creativity and intellectual property; World Economic Forum, October 10, 2025

Seemantani Sharma, Co-Founder, Mabill Technologies | Intellectual Property & Innovation Expert, Mabill Technologies, World Economic Forum; Here's who owns what when it comes to AI, creativity and intellectual property

"Rethinking ownership

The intersection of AI, consciousness and intellectual property requires us to rethink how ownership should evolve. Keeping intellectual property strictly human-centred safeguards accountability, moral agency and the recognition of human creativity. At the same time, acknowledging AI’s expanding role in production may call for new approaches in law. These could take the form of shared ownership models, new categories of liability or entirely new rights frameworks.


For now, the legal balance remains with humans. As long as AI lacks consciousness, it cannot be considered a rights-holder under existing intellectual property theories. Nonetheless, as machine intelligence advances, society faces a pivotal choice. Do we reinforce a human-centred system to protect dignity and creativity or do we adapt the law to reflect emerging realities of collaboration between humans and machines?


This is more than a legal debate. It is a test of how much we value human creativity in an age of intelligent machines. The decisions we take today will shape the future of intellectual property and the meaning of authorship, innovation and human identity itself."

Monday, October 6, 2025

As sports betting explodes, experts push for a public health approach to addiction; NPR, September 30, 2025

NPR; As sports betting explodes, experts push for a public health approach to addiction

"RICHARD BLUMENTHAL: The sophistication and complexity of betting has become staggering.

BROWN: That's U.S. Democratic Senator Richard Blumenthal of Connecticut. He's co-sponsor of the SAFE Bet Act, which would impose federal standards on sports gambling, like no advertising during live sports and no tempting bonus bet promotions.

BLUMENTHAL: States are unable to protect their consumers from excessive and abusive offers and sometimes misleading pitches. They simply don't have the resources or the jurisdiction.

BROWN: The gambling industry is lobbying against the bill. Joe Maloney is with the American Gaming Association. He says federal rules would be a slap in the face to state regulators.

JOE MALONEY: You have the potential to just dramatically, one, usurp the state's authority and then, two, freeze the industry in place.

BROWN: He says the industry acknowledges that gambling is addictive for some people, which is why it developed a model called Responsible Gaming. That includes messages warning people to stop playing when it's no longer fun, and reminding them the odds are very low.

MALONEY: And there's very direct messages, such as, you will lose money here."

Sunday, October 5, 2025

America goes gambling; Quartz, October 5, 2025

Jackie Snow, Quartz; America goes gambling


[Kip Currier: This Quartz article, America Goes Gambling, is a timely look at a significant AI-driven development: the massive growth in online gambling, sports betting, and gambling addiction since the U.S. Supreme Court struck down the de facto federal ban on these activities (outside of Nevada and tribal casinos) in its 2018 Murphy v. NCAA decision.

I spoke on the issue of AI-enhanced online gambling and sports betting at the September 2025 Faithful Futures: Guiding AI with Wisdom and Witness conference in Minneapolis and am currently finishing a chapter for publication on this emerging topic.]


[Excerpt]

"On any given Sunday this football season, Americans are placing millions in legal sports bets, a level of widespread wagering that would have been almost impossible a decade ago when only Nevada offered legal sportsbooks.

Today's football slate represents the peak of a sports betting boom that has fundamentally altered how Americans watch games. Sunday's action is part of an industry that's grown from $4.9 billion in total annual wagers in 2017 to almost $150 billion in 2024. But beneath the Sunday spectacle lies a growing concern, with addiction specialists reporting record demand for gambling help as the line between sports entertainment and financial risk becomes increasingly blurred.

The transformation has been swift and dramatic. When the Supreme Court struck down the federal sports betting ban in Murphy v. NCAA in 2018, legal sports betting was confined to Nevada and tribal casinos. Today, legal sports betting operates in 39 states and Washington, D.C., with more statehouses considering laws that would greenlight it."

Saturday, October 4, 2025

I’m a Screenwriter. Is It All Right if I Use A.I.?; The Ethicist, The New York Times, October 4, 2025

The Ethicist, The New York Times; I’m a Screenwriter. Is It All Right if I Use A.I.?

"I write for television, both series and movies. Much of my work is historical or fact-based, and I have found that researching with ChatGPT makes Googling feel like driving to the library, combing the card catalog, ordering books and waiting weeks for them to arrive. This new tool has been a game changer. Then I began feeding ChatGPT my scripts and asking for feedback. The notes on consistency, clarity and narrative build were extremely helpful. Recently I went one step further: I asked it to write a couple of scenes. In seconds, they appeared — quick paced, emotional, funny, driven by a propulsive heartbeat, with dialogue that sounded like real people talking. With a few tweaks, I could drop them straight into a screenplay. So what ethical line would I be crossing? Would it be plagiarism? Theft? Misrepresentation? I wonder what you think. — Name Withheld"

How to live a good life in difficult times: Yuval Noah Harari, Rory Stewart and Maria Ressa in conversation; The Guardian, October 4, 2025

Interview, The Guardian; How to live a good life in difficult times: Yuval Noah Harari, Rory Stewart and Maria Ressa in conversation


[Kip Currier: This is one of the most insightful, nuanced, and enlightening pieces I have read this year, out of the thousands I have read. I have followed and admired the work and wisdom of Maria Ressa and Yuval Noah Harari, but wasn't familiar with UK academic and politician Rory Stewart, who makes interesting contributions to this joint interview. They all, individually and collectively, identify in clear-eyed fashion what's going on in the world today, what the stakes are, and what each of us can do to try to make some kind of positive difference.

I shared this article with others in my network and encourage you to do the same, so these beneficial, thought-provoking perspectives can be read by as many as possible.]


[Excerpt]

"What happens when an internationally bestselling historian, a Nobel peace prize-winning journalist and a former politician get together to discuss the state of the world, and where we’re heading? Yuval Noah Harari is an Israeli medieval and military historian best known for his panoramic surveys of human history, including Sapiens, Homo Deus and, most recently, Nexus: A Brief History of Information Networks from the Stone Age to AI. Maria Ressa, joint winner of the Nobel peace prize, is a Filipino and American journalist who co-founded the news website Rappler. And Rory Stewart is a British academic and former Conservative MP, writer and co-host of The Rest Is Politics podcast. Their conversation ranged over the rise of AI, the crisis in democracy and the prospect of a Trump-Putin wedding, but began by considering a question central to all of their work: how to live a good life in an increasingly fragmented and fragile world?...

YNH I think that more people need to realise that we have to do the hard work ourselves. There is a tendency to assume that we can rely on reality to do the job for us. That if there are people who talk nonsense, who support illogical policies, who ignore the facts, sooner or later, reality will wreak vengeance on them. And this is not the way that history works.

So if you want the truth, and you want reality to win, each of us has to do some of the hard work ourselves: choose one thing and focus on that and hope that other people will also do their share. That way we avoid the extremes of despair."

Sunday, September 28, 2025

Why I gave the world wide web away for free; The Guardian, September 28, 2025

The Guardian; Why I gave the world wide web away for free

"Sharing your information in a smart way can also liberate it. Why is your smartwatch writing your biological data to one silo in one format? Why is your credit card writing your financial data to a second silo in a different format? Why are your YouTube comments, Reddit posts, Facebook updates and tweets all stored in different places? Why is the default expectation that you aren’t supposed to be able to look at any of this stuff? You generate all this data – your actions, your choices, your body, your preferences, your decisions. You should own it. You should be empowered by it.

Somewhere between my original vision for web 1.0 and the rise of social media as part of web 2.0, we took the wrong path. We’re now at a new crossroads, one where we must decide if AI will be used for the betterment or to the detriment of society. How can we learn from the mistakes of the past? First of all, we must ensure policymakers do not end up playing the same decade-long game of catchup they have done over social media. The time to decide the governance model for AI was yesterday, so we must act with urgency.

In 2017, I wrote a thought experiment about an AI that works for you. I called it Charlie. Charlie works for you like your doctor or your lawyer, bound by law, regulation and codes of conduct. Why can’t the same frameworks be adopted for AI? We have learned from social media that power rests with the monopolies who control and harvest personal data. We can’t let the same thing happen with AI.

So how do we move forward? Part of the frustration with democracy in the 21st century is that governments have been too slow to meet the demands of digital citizens. The AI industry landscape is fiercely competitive, and development and governance are dictated by companies. The lesson from social media is that this will not create value for the individual.

I coded the world wide web on a single computer in a small room. But that small room didn’t belong to me, it was at Cern. Cern was created in the aftermath of the second world war by the UN and European governments who identified a historic, scientific turning point that required international collaboration. It is hard to imagine a big tech company agreeing to share the world wide web for no commercial reward like Cern allowed me to. That’s why we need a Cern-like not-for-profit body driving forward international AI research.

I gave the world wide web away for free because I thought that it would only work if it worked for everyone. Today, I believe that to be truer than ever. Regulation and global governance are technically feasible, but reliant on political willpower. If we are able to muster it, we have the chance to restore the web as a tool for collaboration, creativity and compassion across cultural borders. We can re-empower individuals, and take the web back. It’s not too late."

Wednesday, September 24, 2025

Copyright and AI: Controlling Rights and Managing Risks; Morgan Lewis, September 23, 2025

JOSHUA M. DALTON, Partner, Boston; COLLEEN GANIN, Partner, New York; MICHAEL R. PFEUFFER, Senior Attorney, Pittsburgh, Morgan Lewis; Copyright and AI: Controlling Rights and Managing Risks

"The law on copyright and AI is still developing, with courts and policymakers testing the limits of authorship, infringement, and fair use. Companies should expect continued uncertainty and rapid change in this space."

AI Influencers: Libraries Guiding AI Use; Library Journal, September 16, 2025

Matt Enis, Library Journal; AI Influencers: Libraries Guiding AI Use

"In addition to the field’s collective power, libraries can have a great deal of influence locally, says R. David Lankes, the Virginia and Charles Bowden Professor of Librarianship at the University of Texas at Austin and cohost of LJ’s Libraries Lead podcast.

“Right now, the place where librarians and libraries could have the most impact isn’t on trying to change OpenAI or Microsoft or Google; it’s really in looking at implementation policy,” Lankes says. For example, “on the public library side, many cities and states are adopting AI policies now, as we speak,” Lankes says. “Where I am in Austin, the city has more or less said, ‘go forth and use AI,’ and that has turned into a mandate for all of the city offices, which in this case includes the Austin Public Library” (APL). 

Rather than responding to that mandate by simply deciding how the library would use AI internally, APL created a professional development program to bring its librarians up to speed with the technology so that they can offer other city offices help with ways to use it, and advice on how to use it ethically and appropriately, Lankes explains.

“Cities and counties are wrestling with AI, and this is an absolutely perfect time for libraries to be part of that conversation,” Lankes says."

AI as Intellectual Property: A Strategic Framework for the Legal Profession; JD Supra, September 18, 2025

Co-authors: James E. Malackowski and Eric T. Carnick, JD Supra; AI as Intellectual Property: A Strategic Framework for the Legal Profession

"The artificial intelligence revolution presents the legal profession with its most significant practice development opportunity since the emergence of the internet. AI spending across hardware, software, and services reached $279.22 billion in 2024 and is projected to grow at a compound annual growth rate of 35.9% through 2030, reaching $1.8 trillion.[i] AI is rapidly enabling unprecedented efficiencies, insights, and capabilities in industry. The innovations underlying these benefits are often the result of protectable intellectual property (IP) assets. The ability to raise capital and achieve higher valuations can often be traced back to such IP. According to data from Carta, startups categorized as AI companies raised approximately one-third of total venture funding in 2024. Looking only at late-stage funding (Series E+), almost half (48%) of total capital raised went to AI companies.[ii]Organizations that implement strategic AI IP management can realize significant financial benefits.

At the same time, AI-driven enhancements have introduced profound industry risks, e.g., disruption of traditional business models; job displacement and labor market reductions; ethical and responsible AI concerns; security, regulatory, and compliance challenges; and potentially, in more extreme scenarios, broad catastrophic economic consequences. Such risks are exacerbated by the tremendous pace of AI development and adoption, in some cases surpassing societal understanding and regulatory frameworks. According to McKinsey, 78% of respondents say their organizations use AI in at least one business function, up from 72% in early 2024 and 55% a year earlier.[iii]

This duality—AI as both a catalyst and a disruptor—is now a feature of the modern global economy. There is an urgent need for legal frameworks that can protect AI innovation, facilitate the proper commercial development and deployment of AI-related IP, and navigate the risks and challenges posed by this new technology. Legal professionals who embrace AI as IP™ will benefit from this duality. Early indicators suggest significant advantages for legal practitioners who develop specialized AI as IP expertise, while traditional IP practices may face commoditization pressures."

Monday, September 22, 2025

If Anyone Builds It, Everyone Dies review – how AI could kill us all; The Guardian, September 22, 2025

The Guardian; If Anyone Builds It, Everyone Dies review – how AI could kill us all

"“History,” they write, “is full of … examples of catastrophic risk being minimised and ignored,” from leaded petrol to Chornobyl. But what about predictions of catastrophic risk being proved wrong? History is full of those, too, from Malthus’s population apocalypse to Y2K. Yudkowsky himself once claimed that nanotechnology would destroy humanity “no later than 2010”.

The problem is that you can be overconfident, inconsistent, a serial doom-monger, and still be right. It’s important to be aware of our own motivated reasoning when considering the arguments presented here; we have every incentive to disbelieve them.

And while it’s true that they don’t represent the scientific consensus, this is a rapidly changing, poorly understood field. What constitutes intelligence, what constitutes “super”, whether intelligence alone is enough to ensure world domination – all of this is furiously debated.

At the same time, the consensus that does exist is not particularly reassuring. In a 2024 survey of 2,778 AI researchers, the median probability placed on “extremely bad outcomes, such as human extinction” was 5%. Worryingly, “having thought more (either ‘a lot’ or ‘a great deal’) about the question was associated with a median of 9%, while having thought ‘little’ or ‘very little’ was associated with a median of 5%”.

Yudkowsky has been thinking about the problem for most of his adult life. The fact that his prediction sits north of 99% might reflect a kind of hysterical monomania, or an especially thorough engagement with the problem. Whatever the case, it feels like everyone with an interest in the future has a duty to read what he and Soares have to say."

Friday, September 19, 2025

The 18th-century legal case that changed the face of music copyright law; WIPO Magazine, September 18, 2025

Eyal Brook, Partner, Head of Artificial Intelligence, S. Horowitz & Co., WIPO Magazine; The 18th-century legal case that changed the face of music copyright law

"As we stand at the threshold of the AI revolution in music creation, perhaps the most valuable lesson from this history is not any particular legal doctrine but rather the recognition that our conceptions of musical works and authorship are not fixed but evolving.

Imagine what would have happened had Berne negotiators decided to define the term in 1886. The “musical work” as a legal concept was born from Johann Christian Bach’s determination to assert his creative rights – and it continues to transform with each new technological development and artistic innovation.

The challenge for copyright law in the 21st century is to keep fulfilling copyright’s fundamental purpose: to recognize and reward human creativity in all its forms. This will require not just legal ingenuity but also a willingness to reconsider our most basic assumptions about what music is and how it comes into being.

Bach’s legacy, then, is not just the precedent that he established but the ongoing conversation he initiated – an unfinished symphony of legal thought that continues to evolve with each new technological revolution and artistic movement.

As we face the challenges of AI and whatever technologies may follow, we would do well to remember that the questions we ask today about ownership and creativity echo those first raised in a London courtroom almost 250 years ago by a composer determined to claim what he believed was rightfully his."

Thursday, September 18, 2025

AI could never replace my authors. But, without regulation, it will ruin publishing as we know it; The Guardian, September 18, 2025

The Guardian; AI could never replace my authors. But, without regulation, it will ruin publishing as we know it


[Kip Currier: This is a thought-provoking piece by literary agent Jonny Geller. He suggests an “artists’ rights charter for AI that protects two basic principles: permission and attribution”. His charter idea reflects some aspects of the area of copyright known as “moral rights”.

Moral rights provide copyright creators with a right of paternity (i.e., attribution) and a right of integrity. The latter can enable creators to exercise some level of control over how their copyrighted works may be adapted. The moral right of integrity, for example, was an argument in cases involving whether black and white films (legally) could be or (ethically) should be colorized. (See Colors in Conflicts: Moral Rights and the Foreign Exploitation of Colorized U.S. Motion Pictures.) Moral rights are not widespread in U.S. copyright law because of tensions between the moral right of integrity and the right of free expression/free speech under the U.S. Constitution (whose September 17, 1787 birthday was yesterday). The Visual Artists Rights Act (1990) is a narrow example of moral rights under U.S. copyright law.

To Geller's proposed Artists' Rights Charter for AI I'd suggest adding the word and concept of "Responsibilities". Compelling arguments can be made for providing authors with some rights regarding use of their copyrighted works as AI training data. And, commensurately, persuasive arguments can be made that authors have certain responsibilities if they use AI at any stage of their creative processes. Authors can and ethically should be transparent about how they have used AI, if applicable, in the creation stages of their writing.

Of course, how to operationalize that as an ethical standard is another matter entirely. But the fact that it may be challenging to develop ethical guidance for authors, and to instill it as a broad standard, doesn't mean it shouldn't be attempted.]


[Excerpt]

"The single biggest threat to the livelihood of authors and, by extension, to our culture, is not short attention spans. It is AI...

As a literary agent and CEO of one of the largest agencies in Europe, I think this is something everyone should care about – not because we fear progress, but because we want to protect it. If you take away the one thing that makes us truly human – our ability to think like humans, create stories and imagine new worlds – we will live in a diminished world.

AI that doesn’t replace the artist, or that will work with them transparently, is not all bad. An actor who is needed for reshoots on a movie may authorise use of the footage they have to complete a picture. This will save on costs, the environmental impact and time. A writer may wish to speed up their research and enhance their work by training their own models to ask the questions that a researcher would. The translation models available may enhance the range of offering of foreign books, adding to our culture.

All of this is worth discussing. But it has to be a discussion and be transparent to the end user. Up to now, work has simply been stolen and there are insufficient guardrails on the distributors, studios, publishers. As a literary agent, I have a more prosaic reason to get involved – I don’t think it is fair for someone’s work to be taken without their permission to create an inferior competitor.

What can we do? We could start with some basic principles for all to sign up to. An artists’ rights charter for AI that protects two basic principles: permission and attribution."

Tuesday, September 16, 2025

AI will make the rich unfathomably richer. Is this really what we want?; The Guardian, September 16, 2025

The Guardian; AI will make the rich unfathomably richer. Is this really what we want?

"Socially, the great gains of the knowledge economy have also failed to live up to their promises. With instantaneous global connectivity, we were promised cultural excellence and social effervescence. Instead, we’ve been delivered an endless scroll of slop. Smartphone addictions have made us more vicious, bitter and boring. Social media has made us narcissistic. Our attention spans have been zapped by the constant, pathological need to check our notifications. In the built environment, the omnipresence of touchscreen kiosks has removed even the slightest possibility of social interaction. Instead of having conversations with strangers, we now only interact with screens. All of this has made us more lonely and less happy. As a cure, we’re now offered AI companions, which have the unfortunate side effect of occasionally inducing psychotic breaks. Do we really need any more of this?"

Monday, September 15, 2025

Google's top AI scientist says ‘learning how to learn’ will be next generation's most needed skill; Associated Press via Pittsburgh Post-Gazette, September 12, 2025

DEREK GATOPOULOS, Associated Press via Pittsburgh Post-Gazette; Google's top AI scientist says ‘learning how to learn’ will be next generation's most needed skill

"A top Google scientist and 2024 Nobel laureate said Friday that the most important skill for the next generation will be “learning how to learn” to keep pace with change as artificial intelligence transforms education and the workplace.

Speaking at an ancient Roman theater at the foot of the Acropolis in Athens, Demis Hassabis, CEO of Google’s DeepMind, said rapid technological change demands a new approach to learning and skill development...

Greek Prime Minister Kyriakos Mitsotakis joined Mr. Hassabis at the Athens event after discussing ways to expand AI use in government services. Mr. Mitsotakis warned that the continued growth of huge tech companies could create great global financial inequality.

“Unless people actually see benefits, personal benefits, to this (AI) revolution, they will tend to become very skeptical," he said. "And if they see ... obscene wealth being created within very few companies, this is a recipe for significant social unrest.”

Sunday, September 14, 2025

Preparing faith leaders to prepare others to use artificial intelligence in a faithful way; Presbyterian News Service, September 4, 2025

Mike Ferguson, Presbyterian News Service; Preparing faith leaders to prepare others to use artificial intelligence in a faithful way

"It turns out an engineer whose career included stops at Boeing and Amazon — and who happens to be a person of deep faith — has plenty to say about how faith leaders can use artificial intelligence in places of worship.

Jovonia Taylor-Hayes took to the lectern Wednesday during Faithful Futures: Guiding AI with Wisdom and Witness, which is being offered online and at Westminster Presbyterian Church in Minneapolis. The PC(USA)’s Office of Innovation is among the organizers and sponsors, which also includes The Episcopal Church, the United Methodist Church and the Evangelical Lutheran Church in America.

Think of all the varied ways everyday people use AI, Taylor-Hayes said, including as an aid to streamline grocery shopping and resume building; by medical teams for note-taking; for virtual meetings and closed-captioning, which is getting better, she said; and in customer service.

“The question is, what does it look like when we stop and think about what AI means to me personally? Where does your head and heart go?” she asked. One place where hers goes to is scripture, including Ephesians 2:10 and Psalm 139:14. “God has prepared us,” she said, “to do what we need to do.”

During the first of two breakout sessions, she asked small groups both in person and online to discuss questions including where AI shows up in their daily work and life and why they use AI as a tool."

Saturday, September 13, 2025

World Meeting on Human Fraternity: Disarming words to disarm the world; Vatican News, September 13, 2025

Roberto Paglialonga, Vatican News; World Meeting on Human Fraternity: Disarming words to disarm the world


[Kip Currier: There is great wisdom and guidance in these words from Pope Leo and Fr. Enzo Fortunato (highlighted from this Vatican News article for emphasis):

“Pope Leo XIV’s words echo: ‘Before being believers, we are called to be human.’” Therefore, Fr. Fortunato concluded, we must “safeguard truth, freedom, and dignity as common goods of humanity. That is the soul of our work—not the defense of corporations or interests.”

What is in the best interests of corporations and shareholders should not -- must not -- ever be this planet's central organizing principle.

To the contrary, that which is at the very center of our humanity -- truth, freedom, the well-being and dignity of each and every person, and prioritization of the best interests of all members of humanity -- MUST be our North Star and guiding light.]


[Excerpt]

"Representatives from the world of communication and information—directors and CEOs of international media networks— gathered in Rome for the “News G20” roundtable, coordinated by Father Enzo Fortunato, director of the magazine Piazza San Pietro. The event took place on Friday 12 September in the Sala della Protomoteca on Rome's Capitoline Hill. The participants addressed a multitude of themes, including transparency and freedom of information in times of war and conflict: the truth of facts as an essential element to “disarm words and disarm the world,” as Pope Leo XIV has said, so that storytelling and narrative may once again serve peace, dialogue, and fraternity. They also discussed the responsibility of those who work in media to promote the value of competence, in-depth reporting, and credibility in an age dominated by unchecked social media, algorithms, clickbait slogans, and rampant expressions of hatred and violence from online haters.

Three pillars of our time: truth, freedom, dignity


In opening the workshop, Father Fortunato outlined three “pillars” that can no longer be taken for granted in our time: truth, freedom, and dignity. Truth, he said, is “too often manipulated and exploited,” and freedom is “wounded,” as in many countries around the world “journalists are silenced, persecuted, or killed.” Yet “freedom of the press should be a guarantee for citizens and a safeguard for democracy.” Today, Fr. Fortunato continued, “we have many ‘dignitaries’ but little dignity”: people are targeted by “hate and defamation campaigns, often deliberately orchestrated behind a computer screen. Words can wound more than weapons—and not infrequently, those wounds lead to extreme acts.” Precisely in a historical period marked by division and conflict, humanity—despite its diverse peoples, cultures, and opinions—is called to rediscover what unites it. “Pope Leo XIV’s words echo: ‘Before being believers, we are called to be human.’” Therefore, Fr. Fortunato concluded, we must “safeguard truth, freedom, and dignity as common goods of humanity. That is the soul of our work—not the defense of corporations or interests.”"

Thursday, September 11, 2025

Books by Bots: Librarians grapple with AI-generated material in collections; American Libraries, September 2, 2025

Reema Saleh, American Libraries; Books by Bots: Librarians grapple with AI-generated material in collections

"How to Spot AI-Generated Books

Once an AI-generated book has made it to your library, it will likely give itself away with telltale signs such as jumbled, repetitive, or contradicting sentences; glaring grammatical errors or false statements; or digital art that looks too smooth around the corners.

Of course, if you can get a digital sneak-peek inside a book before ordering, all the better. But if not, how can you head off AI content so it never arrives on your desk? The following tips can help.

  • Look into who the author is and how “real” they seem, says Robin Bradford, a collection development librarian at a public library in Washington. An author with no digital footprint is a red flag, especially if they are credited with a slew of titles each year. Also a red flag: a book with no author listed at all.
  • Exercise caution regarding self-published books, small presses, or platforms such as Amazon, which filters out less AI-generated content than other vendors do.
  • Think about whether the book is capitalizing on the chance that a reader will confuse it with another, more popular book, says Jane Stimpson, a library instruction and educational technology consultant for the Massachusetts Library System. Does it have a cover similar to that of an existing bestseller? Just as animated Disney movies get imitated by low-budget knockoffs, popular titles get imitated by AI-generated books.
  • Check if there is mention of AI use in the Library of Congress record associated with the book, says Sarah Manning, a collection development librarian at Boise (Idaho) Public Library (BPL). If the book has been registered with the US Copyright Office, its record may mention AI."

Monday, September 8, 2025

Faith leaders bring ethical concerns, curiosity to AI debate at multi-denominational conference; Episcopal News Service (ENS), September 5, 2025

David Paulsen, Episcopal News Service (ENS); Faith leaders bring ethical concerns, curiosity to AI debate at multi-denominational conference

"Some of the most tech-forward minds in the Protestant church gathered here this week at the Faithful Futures conference, where participants wrestled with the ethical, practical and spiritual implications of artificial intelligence. The Episcopal Church is one of four Protestant denominations that hosted the Sept. 2-5 conference. About halfway through, one of the moderators acknowledged that AI has advanced so far and so rapidly that most conferences on AI are no longer focused just on AI...

AI raises spiritual questions over what it means to be human

Much of the conference seemed to pivot on questions that defied easy answers. In an afternoon session Sept. 3, several church leaders who attended last year’s Faithful Futures conference in Seattle, Washington, were invited to give 10-minute presentations on their preferred topics.

“What happens to theology when the appearance of intelligence is no longer uniquely human?” said the Rev. Michael DeLashmutt, a theology professor at General Theological Seminary in New York, New York, who also serves as the Episcopal seminary’s senior vice president.

DeLashmutt argued that people of faith, in an era of AI, must not forget what it means to be Christian and to be human. “Being human means being relational, embodied, justice-oriented and open to God’s spirit,” he said. “So, I think the real risk is not that machines will become human, but that we will forget the fullness of what humanity actually is.”

Kip Currier, a computing and information professor at the University of Pittsburgh, warned that AI is being used by sports betting platforms to appeal to gamblers, including those suffering from addiction. Mark Douglas, an ethics professor at Columbia Theological Seminary, outlined the ecological impact of AI data centers, which need to consume massive amounts of energy and water.

The Rev. Andy Morgan, a Presbyterian pastor based in Knoxville, Tennessee, described himself as his denomination’s “unofficial AI person” and suggested that preachers should not be afraid of using AI to improve their sermons – as long as they establish boundaries to prevent delegating too much to the technology."

Sunday, September 7, 2025

Nashville church helps unhoused people after downtown library fire; NewsChannel5, September 6, 2025

"When the Nashville Public Library's downtown branch closed after a fire, McKendree United Methodist Church stepped up to fill a critical gap for people experiencing homelessness who had lost their daily refuge.

"Alright we'll get ya all bagged up here," said Francie Markham, who volunteers at the church every Thursday morning helping people experiencing homelessness...

After losing their cool refuge with computers and resources, Smith said many people just wanted to avoid the long stretch of summer heat.

"So what we were able to do on our Tuesdays and Thursday meal is to allow them to come in much earlier rather than at the 11:30 times so they would be out of the element," Smith said.

"With the changing of the season we need it open as soon as we can," Smith said.

In the meantime, Smith and Markham keep doing what's written on the walls — serving kindness.

Despite initial reports the library would open soon after the fire, library officials say the library requires a third party inspection before it can open. The two nearest library branches, North Branch and Hadley Park, are both more than a 30-minute walk from the library downtown. 

Have you witnessed acts of community kindness during challenging times? Share your story with Kim Rafferty and help us highlight the helpers making a difference in Middle Tennessee. Email kim.rafferty@NewsChannel5.com to continue the conversation.

In this article, we used artificial intelligence to help us convert a video news report originally written by Kim Rafferty. When using this tool, both Kim and the NewsChannel 5 editorial team verified all the facts in the article to make sure it is fair and accurate before we published it. We care about your trust in us and where you get your news, and using this tool allows us to convert our news coverage into different formats so we can quickly reach you where you like to consume information. It also lets our journalists spend more time looking into your story ideas, listening to you and digging into the stories that matter."