
Friday, June 27, 2025

No One Is in Charge at the US Copyright Office; Wired, June 27, 2025

WIRED; No One Is in Charge at the US Copyright Office

"It’s a tumultuous time for copyright in the United States, with dozens of potentially economy-shaking AI copyright lawsuits winding through the courts. It’s also the most turbulent moment in the US Copyright Office’s history. Described as “sleepy” in the past, the Copyright Office has taken on new prominence during the AI boom, issuing key rulings about AI and copyright. It also hasn’t had a leader in more than a month...

As the legality of the ouster is debated, the reality within the office is this: There’s effectively nobody in charge. And without a leader actually showing up at work, the Copyright Office is not totally business-as-usual; in fact, there’s debate over whether the copyright certificates it’s issuing could be challenged."

Thursday, June 19, 2025

Georgetown Hires New University Librarian and Dean of the Library; Georgetown University, June 18, 2025

 Georgetown University; Georgetown Hires New University Librarian and Dean of the Library

 "Georgetown has appointed Alexia Hudson-Ward, a leader in university library systems, as the new university librarian and dean of the library.

Hudson-Ward will begin her role on Aug. 30, 2025, following the departure of Library Dean Harriette Hemmasi, who is retiring after seven years at Georgetown.

Hudson-Ward is the associate director of research, learning and strategic partnerships at the Massachusetts Institute of Technology (MIT) Libraries. 

As dean, she will serve as the chief administrative officer for the Georgetown University Library, which holds 3.5 million volumes and extensive collections and offers research and information services for students and faculty. 

Hudson-Ward will oversee the university libraries, which include the Joseph Mark Lauinger Memorial Library, Blommer Science Library, the Capitol Campus Library and the historic Riggs Memorial Library...

Hudson-Ward earned her master of library and information science from the University of Pittsburgh, where she pursued both academic and corporate librarianship tracks, and her doctorate in library and information science from Simmons University.

In the years since, she served as a tenured associate librarian at Pennsylvania State University and the director of libraries for Oberlin College — the first person of color to lead in the library’s 192-year history. In this role, she oversaw four libraries, a $7.2 million budget, and the renaming of the Main Library after Oberlin alumna Mary Church Terrell, who was one of the first Black women to earn a college degree and the co-founder of the NAACP.

She joined MIT in 2020, where she leads research and learning services for MIT’s library; partnerships with more than 40 MIT departments, labs, centers and institutes; and the library’s AI strategy — work she’s eager to continue at Georgetown. Her latest book project, Social Intelligence in the Age of AI, will be published by ALA Editions later this year."

Sunday, June 15, 2025

NEVER SAY GOODBYE; The New York Times, June 13, 2025

Susan Dominus; Videos by Singeli Agnew, The New York Times; NEVER SAY GOODBYE

"StoryFile frequently works with foundations and museums, but it has already made interactive videos for several individual clients. In the future, the company intends to release a generative-A.I. app in which customers can create avatars that answer questions not provided in advance, by uploading a person’s emails, social media posts and other background material.

Matt and Joan preferred what they signed up for, which would be an avatar of Peter who answered only the questions that were posed while he was alive. Everything he said, they would know, was something he believed to be true, rather than an extrapolation. “It won’t change the reality that I’ve lost my father,” Matt said. “But it lessens the blow ever so slightly, knowing that when he does die, it won’t be the last time I’ll ever have a conversation with him.”...

Matt felt a tension between being moved by how real the experience felt yet also being reminded that it was a rendering. ...“It was a reminder that this is a human I love that I want to console. But you can’t console a video clip.”

AI BOOST OR BUST?; Syracuse University, June 10, 2025

Jay Cox, Syracuse University; AI BOOST OR BUST?

 "Himmelreich is focused on AI’s use in government and its role in decision-making, as well as AI regulations and policy. He’s co-editor of The Oxford Handbook of AI Governance (Oxford University Press, 2024), which examines the challenges and opportunities at the intersection of AI and governance. His interest in politics, social justice, computation and the digital economy often spark his interdisciplinary approach to research. He’s delved into such topics as killer robots and self-driving cars. With the support of a two-year grant from the National Endowment for the Humanities, Himmelreich is working on a book about the philosophy and ethics of data science and good decision-making. “If data science is about supporting decision-making, then you want to make sure the decisions are fair and don’t harm people,” he says."

Friday, June 13, 2025

Vatican roundtable: Pope Leo XIV, AI ethics and sexual abuse crisis reforms; America Magazine, June 12, 2025

Inside the Vatican, America Magazine; Vatican roundtable: Pope Leo XIV, AI ethics and sexual abuse crisis reforms

 "Pope Leo and AI

What early signals has Pope Leo given about artificial intelligence, deep fakes and the ethical challenges of new technologies? How might these shape the church’s approach to rapidly evolving technology environments?"

Tuesday, June 10, 2025

Global AI: Compression, Complexity, and the Call for Rigorous Oversight; ABA SciTech Lawyer, May 9, 2025

Joan Rose Marie Bullock, ABA SciTech Lawyer; Global AI: Compression, Complexity, and the Call for Rigorous Oversight

"Equally critical is resisting haste. The push to deploy AI, whether in threat detection or data processing, often outpaces scrutiny. Rushed implementations, like untested algorithms in critical systems, can backfire, as any cybersecurity professional can attest from post-incident analyses. The maxim of “measure twice, cut once” applies here: thorough vetting trumps speed. Lawyers, trained in precedent, recognize the cost of acting without foresight; technologists, steeped in iterative testing, understand the value of validation. Prioritizing diligence over being first mitigates catastrophic failures of privacy breaches or security lapses that ripple worldwide."

Monday, June 9, 2025

BFI Report Sets Out 9 Recommendations to Ensure “Ethical, Sustainable, Inclusive AI” Use; The Hollywood Reporter, June 8, 2025

Georg Szalai, The Hollywood Reporter; BFI Report Sets Out 9 Recommendations to Ensure “Ethical, Sustainable, Inclusive AI” Use

"A new report published on Monday by the British Film Institute (BFI) sets out nine recommendations for the U.K. screen sector to ensure that artificial intelligence will be a boon rather than bane for film and TV. 

“AI in the Screen Sector: Perspectives and Paths Forward” analyzes current usage and experimentation with “rapidly evolving generative artificial intelligence (AI) technologies,” the BFI said. “To ensure that the U.K. remains a global leader in screen production and creative innovation, the report sets out a roadmap of key recommendations to support the delivery of ethical, sustainable, and inclusive AI integration across the sector.”"

Sunday, June 8, 2025

Booming US gambling industry a ‘highway without speed limits’, top regulator warns; The Guardian, June 8, 2025

The Guardian; Booming US gambling industry a ‘highway without speed limits’, top regulator warns

"Artificial intelligence is, meanwhile, transforming the gambling sector. “If operators are using technology to target bettors, that technology can be used to promote healthy behaviors,” said Maynard. “And I believe that a way that happens quicker is for regulators to get involved on the issue.”"

Tuesday, June 3, 2025

Artificial Intelligence—Promises and Perils for Humans’ Rights; Harvard Law School Human Rights Program, June 10, 2025 10:30 AM EDT

Harvard Law School Human Rights Program; Artificial Intelligence—Promises and Perils for Humans’ Rights

"In recent years, rapid advances in Artificial Intelligence (AI) technology, significantly accelerated by the development and deployment of deep learning and Large Language Models, have taken center stage in policy discussions and public consciousness. Amidst a public both intrigued and apprehensive about AI’s transformative potential across workplaces, families, and even broader political, economic, and geopolitical structures, a crucial conversation is emerging around its ethical, legal, and policy dimensions.

This webinar will convene a panel of prominent experts from diverse fields to delve into the critical implications of AI for humans and their rights. The discussion will broadly address the anticipated human rights harms stemming from AI’s increasing integration into society and explore potential responses to these challenges. A key focus will be on the role of international law and human rights law in addressing these harms, considering whether this legal framework can offer the appropriate tools for effective intervention."

5 ethical questions about artificial intelligence; Britannica Money, May 2025

Britannica Money; 5 ethical questions about artificial intelligence

"Are you wondering about the ethical implications of artificial intelligence? You’re not alone. AI is an innovative, powerful tool that many fear could produce significant consequences—some positive, some negative, and some downright dangerous.

Ethical concerns about an emerging technology aren’t new, but with the rise of generative AI and rapidly increasing user adoption, the conversation is taking on new urgency. Is AI fair? Does it protect our privacy? Who is accountable when AI makes a mistake—and is AI the ultimate job killer? Enterprises, individuals, and regulators are grappling with these important questions.


Let’s explore the major ethical concerns surrounding artificial intelligence and how AI designers can potentially address these problems."

Monday, June 2, 2025

USC launches $12 million Institute on Ethics & Trust in Computing; USC, May 29, 2025

Will Kwong, USC Today; USC launches $12 million Institute on Ethics & Trust in Computing

"USC is launching the Institute on Ethics & Trust in Computing, where experts will offer ethical guidance and resources to students and researchers on the development and applications of artificial intelligence and other technologies that are now commonplace across business and finance, health care, national security and science.

The new institute is supported by $12 million in funding by the Lord Foundation of California."

Tuesday, May 27, 2025

WATCH: Is A.I. the new colonialism?; The Ink, May 27, 2025

The Ink; WATCH: Is A.I. the new colonialism?

"We just got off a call with the technology journalist Karen Hao, the keenest chronicler of the technology that’s promising — or threatening — to reshape the world, who has a new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.

The book talks not just about artificial intelligence and what it might be, or its most visible spokesperson and what he might believe, but also about the way the tech industry titans resemble more and more the empires of old in their relentless resource extraction and exploitation of labor around the world, their take-no-prisoners competitiveness against supposedly “evil” pretenders, and their religious fervor for progress and even salvation. She also told us about what the future might look like if we get A.I. right, and the people who produce the data, the resources, and control the labor power can reassert their ownership and push back against these new empires to build a more humane and human future."

Wednesday, May 21, 2025

AI is transforming gambling: Researcher explores the ethical risks; Phys.org, May 21, 2025

Alisha Katz, Phys.org; AI is transforming gambling: Researcher explores the ethical risks


[Kip Currier: It's good to see the increasing use of AI in online gambling getting more attention and scrutiny. The AI chapter of my forthcoming Ethics, Information, and Technology book for Bloomsbury also examines this worrisome intersection of AI, ethics, the online gambling/gambling industry, and gamblers themselves, some of whom are particularly vulnerable to AI-assisted manipulation efforts.

Imagine an AI system that knows when a habitual online gambler tends to place bets, what games they like to play and put money on, how much and where they gamble, etc. Couple that data with easily attained demographic profile data (often freely given by users when they sign up for online access), like age, gender, occupation, income level, and place of residence. Those individual data points enable a multi-faceted marketing profile to be rendered about that gambler.

Now, consider the above scenario, but this time the individual is a repeat online gambler who's been trying to stop gambling. They're attending Gamblers Anonymous meetings (which the AI systems likely do not know about) but are being methodically targeted on their smartphones by AI systems that know exactly what to send to lure them back into the gambling world if they haven't placed an online bet for a while. That scenario is real. 60 Minutes reported on it in 2024:

Technology has fueled a sports betting boom and a spike in problem gambling, addiction therapist warns. June 30, 2024. 60 Minutes]


[Excerpt]

"As gamers and spectators prepare for the 2025 World Series of Poker in Las Vegas on May 27, a cultural conversation around AI and ethics in gambling is brewing.

Though the gambling industry is expected to exceed $876 billion worldwide by 2026, there is a growing concern that unregulated AI systems can exploit vulnerable individuals and profit from them.

UF researcher Nasim Binesh, Ph.D., M.B.A., an assistant professor in the UF College of Health & Human Performance's Department of Tourism, Hospitality & Event Management, is exploring this concern, having published a study in the International Journal of Hospitality & Tourism Administration about identifying the risks and ethics of using AI in gambling."

We're All Copyright Owners. Why You Need to Care About AI and Copyright; CNET, May 19, 2025

Katelyn Chedraoui , CNET; We're All Copyright Owners. Why You Need to Care About AI and Copyright

"Most of us don't think about copyright very often in our daily lives. But in the age of generative AI, it has quickly become one of the most important issues in the development and outputs of chatbots and image and video generators. It's something that affects all of us because we're all copyright owners and authors...

What does all of this mean for the future?

Copyright owners are in a bit of a holding pattern for now. But beyond the legal and ethical implications, copyright in the age of AI raises important questions about the value of creative work, the cost of innovation and the ways in which we need or ought to have government intervention and protections. 

There are two distinct ways to view the US's intellectual property laws, Mammen said. The first is that these laws were enacted to encourage and reward human flourishing. The other is more economically focused; the things that we're creating have value, and we want our economy to be able to recognize that value accordingly."

Sunday, May 18, 2025

RIP American innovation; The Washington Post, May 12, 2025

The Washington Post; RIP American innovation

"That U.S. businesses have led the recent revolution in artificial intelligence is owed to the decades of research supported by the U.S. government in computing, neuroscience, autonomous systems, biology and beyond that far precedes those companies’ investments. Virtually the entire U.S. biotech industry — which brought us treatments for diabetes, breast cancer and HIV — has its roots in publicly funded research. Even a small boost to NIH funding has been shown to increase overall patents for biotech and pharmaceutical companies...

Giving out grants for what might look frivolous or wasteful on the surface is a feature, not a bug, of publicly funded research. Consider that Agriculture Department and NIH grants to study chemicals in wild yams led to cortisone and medical steroids becoming widely affordable. Or that knowing more about the fruit fly has aided discoveries related to human aging, Parkinson’s disease and cancer.

For obvious reasons, companies don’t tend to invest in shared scientific knowledge that then allows lots of innovation to flourish. That would mean spending money on something that does not reap quick rewards just for that particular company.

Current business trends are more likely to help kill the U.S. innovation engine. A growing share of the country’s research and development is now being carried out by big, old companies, as opposed to start-ups and universities — and, in the process, the U.S. as a whole is spending more on R&D without getting commensurately more economic growth."

Friday, May 16, 2025

Democrats press Trump on Copyright Office chief’s removal; The Hill, May 14, 2025

Jared Gans, The Hill; Democrats press Trump on Copyright Office chief’s removal

"A half dozen Senate Democrats are pressing President Trump over his firing of the head of the U.S. Copyright Office, arguing that the move is illegal. 

“It threatens the longstanding independence and integrity of the Copyright Office, which plays a vital role in our economy,” the members said in the letter. “You are acting beyond your power and contrary to the intent of Congress as you seek to erode the legal and institutional independence of offices explicitly designed to operate outside the reach of partisan influence.” ...

The head of the Copyright Office is responsible for shaping federal copyright policy, and the senators argued the role is particularly crucial as the country confronts issues concerning the intersection of copyright law and technologies like artificial intelligence."

Monday, May 12, 2025

US Copyright Office found AI companies sometimes breach copyright. Next day its boss was fired; The Register, May 12, 2025

 Simon Sherwood, The Register; US Copyright Office found AI companies sometimes breach copyright. Next day its boss was fired

"The head of the US Copyright Office has reportedly been fired, the day after agency concluded that builders of AI models use of copyrighted material went beyond existing doctrines of fair use.

The office’s opinion on fair use came in a draft of the third part of its report on copyright and artificial intelligence. The first part considered digital replicas and the second tackled whether it is possible to copyright the output of generative AI.

The office published the draft [PDF] of Part 3, which addresses the use of copyrighted works in the development of generative AI systems, on May 9th.

The draft notes that generative AI systems “draw on massive troves of data, including copyrighted works” and asks: “Do any of the acts involved require the copyright owners’ consent or compensation?”"

Sunday, May 11, 2025

Trump fires Copyright Office director after report raises questions about AI training; TechCrunch, May 11, 2025

Anthony Ha, TechCrunch; Trump fires Copyright Office director after report raises questions about AI training

"As for how this ties into Musk (a Trump ally) and AI, Morelle linked to a pre-publication version of a U.S. Copyright Office report released this week that focuses on copyright and artificial intelligence. (In fact, it’s actually part three of a longer report.)

In it, the Copyright Office says that while it’s “not possible to prejudge” the outcome of individual cases, there are limitations on how much AI companies can count on “fair use” as a defense when they train their models on copyrighted content. For example, the report says research and analysis would probably be allowed.

“But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries,” it continues.

The Copyright Office goes on to suggest that government intervention “would be premature at this time,” but it expresses hope that “licensing markets” where AI companies pay copyright holders for access to their content “should continue to develop,” adding that “alternative approaches such as extended collective licensing should be considered to address any market failure.”

AI companies including OpenAI currently face a number of lawsuits accusing them of copyright infringement, and OpenAI has also called for the U.S. government to codify a copyright strategy that gives AI companies leeway through fair use.

Musk, meanwhile, is both a co-founder of OpenAI and of a competing startup, xAI (which is merging with the former Twitter). He recently expressed support for Square founder Jack Dorsey’s call to “delete all IP law.”"

Copyright and Artificial Intelligence Part 3: Generative AI Training, Pre-Publication; U.S. Copyright Office, May 2025

 U.S. Copyright Office; Copyright and Artificial Intelligence Part 3: Generative AI Training, Pre-Publication

Thursday, May 1, 2025

AI's Wrestling Match with the Law; AI and Faith, April 25, 2025

David Brenner, AI and Faith; AI's Wrestling Match with the Law

"Everyday we humans wrestle with three questions: what do I want to do, What should I do, And what must I do. This constant questioning is what it means to have agency in a world where we desire many things, instinctually feel some actions are right and wrong, and frequently encounter constraints on our behavior we call “the law”.

Amidst this welter of human agency, a new participant has arrived – AI. AI is expanding our choices for desire, and confusing (and sometimes assisting) our sense of right and wrong. Here we consider the impact of AI on the “must do” question of the law."