Friday, February 16, 2024

From ethics to outsmarting ChatGPT, state unveils resource for AI in Ohio education; Cleveland.com, February 15, 2024

Cleveland.com; From ethics to outsmarting ChatGPT, state unveils resource for AI in Ohio education

"The state released a guide Thursday to help schools and parents navigate generative artificial intelligence in an ethical manner.

“When you use the term AI, I know in some people’s minds, it can sound scary,” said Lt. Gov. Jon Husted, whose InnovateOhio office worked with private sector organizations to develop the guide...

“Every technology that’s come into society has been like that.”...

But AI is the wave of the future, and Husted said it’s important that students are exposed to it.

The AI toolkit is not mandatory but can be used as a resource for educators and families.

It doesn’t include many prescriptive actions for how to begin teaching and using AI. Rather, it contains sections for parents, teachers and school districts where they can find dozens of sample lessons and discussions about ethics, how to develop policies to keep students safe, and other topics.

For instance, teachers can find a template letter that they can send to school district officials to communicate how they’re using AI...

“Before you use AI in the classroom you will need a plan for student privacy, data security, ethics and many other things,” Husted said. “More is needed than just a fun tool in the classroom.”"

Thursday, February 15, 2024

NIST Researchers Suggest Historical Precedent for Ethical AI Research; NIST, February 15, 2024

NIST ; NIST Researchers Suggest Historical Precedent for Ethical AI Research

"If we train artificial intelligence (AI) systems on biased data, they can in turn make biased judgments that affect hiring decisions, loan applications and welfare benefits — to name just a few real-world implications. With this fast-developing technology potentially causing life-changing consequences, how can we make sure that humans train AI systems on data that reflects sound ethical principles? 

A multidisciplinary team of researchers at the National Institute of Standards and Technology (NIST) is suggesting that we already have a workable answer to this question: We should apply the same basic principles that scientists have used for decades to safeguard human subjects research. These three principles — summarized as “respect for persons, beneficence and justice” — are the core ideas of 1979’s watershed Belmont Report, a document that has influenced U.S. government policy on conducting research on human subjects.

The team has published its work in the February issue of IEEE’s Computer magazine, a peer-reviewed journal. While the paper is the authors’ own work and is not official NIST guidance, it dovetails with NIST’s larger effort to support the development of trustworthy and responsible AI.

“We looked at existing principles of human subjects research and explored how they could apply to AI,” said Kristen Greene, a NIST social scientist and one of the paper’s authors. “There’s no need to reinvent the wheel. We can apply an established paradigm to make sure we are being transparent with research participants, as their data may be used to train AI.”

The Belmont Report arose from an effort to respond to unethical research studies, such as the Tuskegee syphilis study, involving human subjects. In 1974, the U.S. created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, and it identified the basic ethical principles for protecting people in research studies. A U.S. federal regulation later codified these principles in 1991’s Common Rule, which requires that researchers get informed consent from research participants. Adopted by many federal departments and agencies, the Common Rule was revised in 2017 to take into account changes and developments in research."

Monday, February 12, 2024

AI and inventorship guidance: Incentivizing human ingenuity and investment in AI-assisted inventions; United States Patent and Trademark Office (USPTO), February 12, 2024

Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office (USPTO); Director's Blog: the latest from USPTO leadership

AI and inventorship guidance: Incentivizing human ingenuity and investment in AI-assisted inventions

"Today, based on the exceptional public feedback we’ve received, we announced our Inventorship Guidance for AI-Assisted Inventions in the Federal Register – the first of these directives. The guidance, which is effective on February 13, 2024, provides instructions to examiners and stakeholders on how to determine whether the human contribution to an innovation is significant enough to qualify for a patent when AI also contributed. The guidance embraces the use of AI in innovation and provides that AI-assisted inventions are not categorically unpatentable. The guidance instructs examiners on how to determine the correct inventor(s) to be named in a patent or patent application for inventions created by humans with the assistance of one or more AI systems. Additionally, we’ve posted specific examples of hypothetical situations and how the guidance would apply to those situations to further assist our examiners and applicants in their understanding."

Inventorship guidance for AI-assisted inventions webinar; United States Patent and Trademark Office (USPTO), March 5, 2024, 1 PM - 2 PM ET

 United States Patent and Trademark Office (USPTO) ; Inventorship guidance for AI-assisted inventions webinar

"The United States Patent and Trademark Office (USPTO) plays an important role in incentivizing and protecting innovation, including innovation enabled by artificial intelligence (AI), to ensure continued U.S. leadership in AI and other emerging technologies (ET).

The USPTO announced Inventorship Guidance for AI-Assisted Inventions in the Federal Register. This guidance is pursuant to President Biden's Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023) with provisions addressing IP equities. The guidance, which is effective on February 13, 2024, provides instructions to USPTO personnel and stakeholders on determining the correct inventor(s) to be named in a patent or patent application for inventions created by humans with the assistance of one or more AI systems.

The USPTO will host a webinar on Inventorship Guidance for AI-Assisted Inventions on Tuesday, March 5, from 1-2 p.m. EST. USPTO personnel will provide an overview of the guidance and answer stakeholder questions relating to the guidance.

This event is free and open to the public, but virtual space is limited, so please register early."

University Librarians See Urgent Need for AI Ethics; Inside Higher Ed, January 17, 2024

  Lauren Coffey, Inside Higher Ed; University Librarians See Urgent Need for AI Ethics

"Nearly three-quarters of university librarians say there’s an urgent need to address artificial intelligence’s ethical and privacy concerns, a survey finds.

Roughly half the librarians surveyed said they had a “moderate” understanding of AI concepts and principles, according to the study released Friday. About one in five said they had a slightly below moderate understanding, and roughly the same amount had a slightly above moderate understanding. Only 3 percent of respondents said they had a “very high” understanding.

The study, conducted in May 2023 by Leo Lo, president-elect of the Association of College and Research Libraries, had 605 respondents who completed the survey. Of those, 45 percent worked in research institutions and 30 percent in institutions with undergraduate and graduate programming."

On Copyright, Creativity, and Compensation; Reason, February 12, 2024

Reason; On Copyright, Creativity, and Compensation

"Some of you may have seen the article by David Segal in the Sunday NY Times several weeks ago [available here] about a rather sordid copyright fracas in which I have been embroiled over the past few months...

What to make of all this? I am not oblivious to the irony of being confronted with this problem after having spent 30 years or so, as a lawyer and law professor, reflecting on and writing about the many mysteries of copyright policy and copyright law in the Internet Age.

Here are a few things that strike me as interesting (and possibly important) in this episode."

Using AI Responsibly; American Libraries, January 21, 2024

Diana Panuncial, American Libraries; Using AI Responsibly

"Navigating misinformation and weighing ethical and privacy issues in artificial intelligence (AI) were top of mind for the panelists at “AI and Libraries: A Discussion on the Future,” a January 21 session at the American Library Association’s 2024 LibLearnX Conference in Baltimore. Flowers was joined by Virginia Cononie, assistant librarian and coordinator of research at University of South Carolina Upstate in Spartanburg; Dray MacFarlane, cofounder of Tasio, an AI consulting company; and Juan Rubio, digital media learning program manager for Seattle Public Library (SPL). 

Rubio, who used AI to create a tool to help teens at SPL reflect on their mental health and well-being, said there is excitement behind the technology and how it can be harnessed, but there should also be efforts to educate patrons on how to use it responsibly. 

“I think ethical use of AI comes with creating ethical people,” he said, adding that SPL has been thinking about implementing guidelines for using AI. “Be very aware of your positionality [as librarians], because I think we are in a place of privilege—not necessarily of money or power, but of knowledge.”"

Friday, February 9, 2024

The Friar Who Became the Vatican’s Go-To Guy on A.I.; The New York Times, February 9, 2024

 Jason Horowitz, The New York Times; The Friar Who Became the Vatican’s Go-To Guy on A.I.

"There is a lot is going on for Father Benanti, who, as both the Vatican’s and the Italian government’s go-to artificial intelligence ethicist, spends his days thinking about the Holy Ghost and the ghosts in the machines.

In recent weeks, the ethics professor, ordained priest and self-proclaimed geek, has joined Bill Gates at a meeting with Prime Minister Giorgia Meloni, presided over a commission seeking to save Italian media from ChatGPT bylines and general A.I. oblivion, and met with Vatican officials to further Pope Francis’s aim of protecting the vulnerable from the coming technological storm."

‘My Heart Sank’: In Maine, a Challenge to a Book, and to a Town’s Self-Image; The New York Times, February 3, 2024

Elizabeth Williamson, The New York Times; ‘My Heart Sank’: In Maine, a Challenge to a Book, and to a Town’s Self-Image

"Mr. Boulet appealed to the American Library Association for a public letter of support, which it offers to libraries undergoing censorship efforts. “They ghosted me,” he said.

Asked about the letter, Deborah Caldwell-Stone, director of the A.L.A.’s Office for Intellectual Freedom, said Mr. Boulet’s request had generated internal debate, and delay.

“Our position on the book is, it should remain in the collection; it is beneath us to adopt the tools of the censors,” she said in an interview. “We need to support intellectual freedom in all its aspects, in order to claim that high ground.” Months after Mr. Boulet requested the letter, Ms. Caldwell-Stone saw him at a conference and apologized...

Before the controversy, “I hadn’t really given intellectual freedom as much thought as I should have,” Mr. Boulet said. His conclusion, he said, is that “intellectual freedom or the freedom of speech isn’t there just to protect ideas that we like.”"

Wednesday, February 7, 2024

EU countries strike deal on landmark AI rulebook; Politico, February 2, 2024

GIAN VOLPICELLI, Politico ; EU countries strike deal on landmark AI rulebook

"European Union member countries on Friday unanimously reached a deal on the bloc’s Artificial Intelligence Act, overcoming last-minute fears that the rulebook would stifle European innovation.

EU deputy ambassadors green-lighted the final compromise text, hashed out following lengthy negotiations between representatives of the Council, members of the European Parliament and European Commission officials...

Over the past few weeks, the bloc’s top economies Germany and France, alongside Austria, hinted that they might oppose the text in Friday’s vote...

Eventually, the matter was resolved through the EU’s familiar blend of PR offensive and diplomatic maneuvering. The Commission ramped up the pressure by announcing a splashy package of pro-innovation measures targeting the AI sector, and in one fell swoop created the EU’s Artificial Intelligence Office — a body tasked with enforcing the AI Act...

A spokesperson for German Digital Minister Volker Wissing, the foremost AI Act skeptic within Germany’s coalition government, told POLITICO: "We asked the EU Commission to clarify that the AI Act does not apply to the use of AI in medical devices."

A statement from the European Commission, circulated among EU diplomats ahead of the vote and seen by POLITICO, reveals plans to set up an “expert group” comprising EU member countries’ authorities. The group’s function will be to “advise and assist” the Commission in applying and implementing the AI Act...

The AI Act still needs the formal approval of the European Parliament. The text is slated to get rubber-stamped at the committee level in two weeks, with a plenary vote expected in April."

Act now on AI before it’s too late, says UNESCO’s AI lead; Fast Company, February 6, 2024

CHRIS STOKEL-WALKER, Fast Company; Act now on AI before it’s too late, says UNESCO’s AI lead

"Starting today, delegates are gathering in Slovenia at the second Global Forum on the Ethics of AI, organized by UNESCO, the United Nations’ educational, scientific, and cultural arm. The meeting is aimed at broadening the conversation around AI risks and the need to consider AI’s impacts beyond those discussed by first-world countries and business leaders.

Ahead of the conference, Gabriela Ramos, assistant director-general for social and human sciences at UNESCO, spoke with Fast Company...

Countries want to learn from each other. Ethics have become very important. Now there’s not a single conversation I go to that is not at some point referring to ethics—which was not the case one year ago...

Tech companies have previously said they can regulate themselves. Do you think they can with AI?

Let me just ask you something: Which sector has been regulating itself in life? Give me a break."

Tuesday, February 6, 2024

‘The situation has become appalling’: fake scientific papers push research credibility to crisis point; The Guardian, February 3, 2024

The Guardian; ‘The situation has become appalling’: fake scientific papers push research credibility to crisis point

"Tens of thousands of bogus research papers are being published in journals in an international scandal that is worsening every year, scientists have warned. Medical research is being compromised, drug development hindered and promising academic research jeopardised thanks to a global wave of sham science that is sweeping laboratories and universities.

Last year the annual number of papers retracted by research journals topped 10,000 for the first time. Most analysts believe the figure is only the tip of an iceberg of scientific fraud."

The Challenges and Benefits of Generative AI in Health Care; Harvard Business Review, January 17, 2024

Harvard Business Review, Azeem Azhar's Exponential View Season 6, Episode 58; The Challenges and Benefits of Generative AI in Health Care

"Artificial Intelligence is on every business leader’s agenda. How do we make sense of the fast-moving new developments in AI over the past year? Azeem Azhar returns to bring clarity to leaders who face a complicated information landscape.

Generative AI has a lot to offer health care professionals and medical scientists. This week, Azeem speaks with renowned cardiologist, scientist, and author Eric Topol about the change he’s observed among his colleagues in the last two years, as generative AI developments have accelerated in medicine.

They discuss:

  • The challenges and benefits of AI in health care.
  • The pros and cons of different open-source and closed-source models for health care use.
  • The medical technology that has been even more transformative than AI in the past year."

Cast as Criminals, America’s Librarians Rally to Their Own Defense; The New York Times, February 3, 2024

 Elizabeth Williamson, The New York Times; Cast as Criminals, America’s Librarians Rally to Their Own Defense

"As America’s libraries have become noisy and sometimes dangerous new battlegrounds in the nation’s culture wars, librarians like Ms. Neujahr and their allies have moved from the stacks to the front lines. People who normally preside over hushed sanctuaries are now battling groups that demand the mass removal of books and seek to control library governance. Last year, more than 150 bills in 35 states aimed to restrict access to library materials, and to punish library workers who do not comply."

Friday, February 2, 2024

European Publishers Praise New EU AI Law; Publishers Weekly, February 2, 2024

Ed Nawotka, Publishers Weekly; European Publishers Praise New EU AI Law

"The Federation of European Publishers (FEP) was quick to praise the passage of new legislation by the European Union that, among its provisions, requires "general purpose AI companies" to respect copyright law and have policies in place to this effect.

FEP officials called the EU Artificial Intelligence (AI) Act, which passed on February 2, the "world’s first concrete regulation of AI," and said that the legislation seeks to "ensure the ethical and human-centric development of this technology and prevent abusive or illegal practices." The law also demands transparency about what data is being used in training the models."

Thursday, February 1, 2024

Read On: We're Distributing 1,500 Banned Books by Black Authors in Philly This February; Visit Philadelphia, January 31, 2024

Visit Philadelphia; Read On: We're Distributing 1,500 Banned Books by Black Authors in Philly This February

"According to Penn America, more than 30 states have banned certain books by Black authors — both fiction and non-fiction — or otherwise deemed them inappropriate.

During Black History Month and beyond, Philadelphia — the birthplace of American democracy — is making these stories accessible and available to both visitors and residents.

Visit Philadelphia has launched the Little Free(dom) Library initiative in partnership with Little Free Library and the Free Library of Philadelphia, providing resources on their site to help protect everyone’s right to read. The effort encourages visitors and residents to explore Black history and engage with Black narratives by borrowing a banned book by a Black author from one of 13 locations throughout the city. Among them: the Philadelphia Museum of Art, the Betsy Ross House, Franklin Square, Eastern State Penitentiary and the Johnson House Historic Site.

The initiative is launching with a dozen titles and 1,500 books in total. The selections include:

  • The 1619 Project: A New Origin Story by Nikole Hannah-Jones
  • All American Boys by Jason Reynolds
  • All Boys Aren’t Blue by George M. Johnson
  • Beloved by Toni Morrison
  • Between the World and Me by Ta-Nehisi Coates
  • The Fire Next Time by James Baldwin
  • Ghost Boys by Jewell Parker Rhodes
  • Hood Feminism: Notes from the Women That a Movement Forgot by Mikki Kendall
  • Roll of Thunder, Hear My Cry by Mildred D. Taylor
  • Stamped: Racism, Antiracism, and You by Jason Reynolds & Ibram X. Kendi
  • Their Eyes Were Watching God by Zora Neale Hurston
  • The Undefeated by Kwame Alexander"

The economy and ethics of AI training data; Marketplace.org, January 31, 2024

Matt Levin, Marketplace.org;  The economy and ethics of AI training data

"Maybe the only industry hotter than artificial intelligence right now? AI litigation. 

Just a sampling: Writer Michael Chabon is suing Meta. Getty Images is suing Stability AI. And both The New York Times and The Authors Guild have filed separate lawsuits against OpenAI and Microsoft. 

At the heart of these cases is the allegation that tech companies illegally used copyrighted works as part of their AI training data. 

For text-focused generative AI, there’s a good chance that some of that training data originated from one massive archive: Common Crawl.

“Common Crawl is the copy of the internet. It’s a 17-year archive of the internet. We make this freely available to researchers, academics and companies,” said Rich Skrenta, who heads the nonprofit Common Crawl Foundation."

Wednesday, January 31, 2024

Lawyers viewed as more ethical than car salespeople and US lawmakers; ABA Journal, January 30, 2024

DEBRA CASSENS WEISS, ABA Journal ; Lawyers viewed as more ethical than car salespeople and US lawmakers

"Only 16% of Americans rate lawyers’ honesty and ethical standards as "high" or "very high," according to a Gallup poll taken in December.

The percentage has decreased since 2022, when 21% of Americans said lawyers had high or very high honesty and ethical standards, and since 2019, when the percentage was 22%, according to a Jan. 22 press release with results of Gallup’s 2023 Honesty and Ethics poll.

Lawyers did better than business executives, insurance salespeople and stockbrokers. Twelve percent of Americans viewed those occupations as having high or very high ethics and honesty. The percentage decreased to 8% for advertising practitioners, car salespeople and senators, and 6% for members of Congress."

California copyright case leaves tattoo artists in limbo; Fox26 Houston, January 29, 2024

Fox26 Houston; California copyright case leaves tattoo artists in limbo

"Patent and Copyright expert Joh Rizvi, known at The Patent Professor, says the California case never got to the issue of whether images reproduced in tattoos are fair to use as art and expression. 

"What I find is the more interesting question is, 'Is a tattoo different? Is this free speech?'" he wonders.

Fair Use has been the subject of countless lawsuits, and Rizvi says this one leaves artists in a legal gray area, with no precedent."

Tuesday, January 30, 2024

Florida’s New Advisory Ethics Opinion on Generative AI Hits the Mark; JDSupra, January 29, 2024

Ralph Artigliere, JDSupra; Florida’s New Advisory Ethics Opinion on Generative AI Hits the Mark

"As a former Florida trial lawyer and judge who appreciates emerging technology, I admit that I had more than a little concern when The Florida Bar announced it was working on a new ethics opinion on generative AI. Generative AI promises to provide monumental advantages to lawyers in their workflow, quality of work product, productivity, and time management and more. For clients, use of generative AI by their lawyers can mean better legal services delivered faster and with greater economy. In the area of eDiscovery, generative AI promises to surpass technology assisted review in helping manage the increasingly massive amounts of data.

Generative AI is new to the greater world, and certainly to busy lawyers who are not reading every blogpost on AI. The internet and journals are afire over concerns of hallucinations, confidentiality, bias, and the like. I felt a new ethics opinion might throw a wet blanket on generative AI and discourage Florida lawyers from investigating the new technology.

Thankfully, my concerns did not become reality. The Florida Bar took a thorough look at the technology and the existing ethical guidance and law and applied existing guidelines and rules in a thorough and balanced fashion. This article briefly summarizes Opinion 24-1 and highlights some of its important features.

The Opinion

On January 19, 2024, The Florida Bar released Ethics Opinion 24-1 (“Opinion 24-1”) regarding the use of generative artificial intelligence (“AI”) in the practice of law. The Florida Bar and the State Bar of California are leaders in issuing ethical guidance on this issue. Opinion 24-1 draws from a solid background of ethics opinions and guidance in Florida and around the country and provides positive as well as cautionary statements regarding the emerging technologies. Overall, the guidance is well-placed and helpful for lawyers at a time when so many are weighing the use of generative AI technology in their law practices."

Lawyers weigh strength of copyright suit filed against BigLaw firm; Rhode Island Lawyers Weekly, January 29, 2024

Pat Murphy, Rhode Island Lawyers Weekly; Lawyers weigh strength of copyright suit filed against BigLaw firm

"Jerry Cohen, a Boston attorney who teaches IP law at Roger Williams University School of Law, called the suit “not so much a copyright case as it is a matter of professional responsibility and respect.”"

Where's the best place to find a robot cat? The library, of course; ZDNet, January 27, 2024

Chris Matyszczyk, ZDNet; Where's the best place to find a robot cat? The library, of course

"As Oregon Public Broadcasting (OPB) reported, the library's customers are involved in a festival of adoration when it comes to these three black-and-white robot felines...

Here's Manistee County Library in Michigan with a veritable array of robotic pets. Cats, dogs and even a bird...

Let's now drift to the Hastings Public Library, also in Michigan. There, just beneath Botley the Coding Robot is: "Robotic Cat. Coming January 2024."

Now you might be wondering what the rules are for going to your local public library and taking a robot cat home with you.

Helpfully, the Reading Public Library in Massachusetts offers some guidelines...

It seems, then, that America's libraries have become homes for robot cats. They bring peace and companionship to many. And that's a good thing."

Monday, January 29, 2024

From Our Fellows – From Automation to Agency: The Future of AI Ethics Education; Center for Democracy & Technology (CDT), January 29, 2024

Ashley Lee, Berkman Klein Center for Internet and Society Affiliate, Harvard University, and CDT Non-Resident Fellow alum, and Victoria Hsieh, Computer Science Undergraduate, Stanford University; Center for Democracy & Technology (CDT); From Our Fellows – From Automation to Agency: The Future of AI Ethics Education

"Disclaimer: The views expressed by CDT’s Non-Resident Fellows and any coauthors are their own and do not necessarily reflect the policy, position, or views of CDT...

AI ethics education can play a significant role in empowering students to collectively reimagine AI practices and processes, and contribute to a cultural transformation that prioritizes ethical and responsible AI."

Saturday, January 27, 2024

Library Copyright Alliance Principles for Copyright and Artificial Intelligence; Library Copyright Alliance (LCA), American Library Association (ALA), Association of Research Libraries (ARL), July 10, 2023

Library Copyright Alliance (LCA), American Library Association (ALA), Association of Research Libraries (ARL); Library Copyright Alliance Principles for Copyright and Artificial Intelligence

"The existing U.S. Copyright Act, as applied and interpreted by the Copyright Office and the courts, is fully capable at this time to address the intersection of copyright and AI without amendment.

  • Based on well-established precedent, the ingestion of copyrighted works to create large language models or other AI training databases generally is a fair use.

    • Because tens—if not hundreds—of millions of works are ingested to create an LLM, the contribution of any one work to the operation of the LLM is de minimis; accordingly, remuneration for ingestion is neither appropriate nor feasible.

    • Further, copyright owners can employ technical means such as the Robots Exclusion Protocol to prevent their works from being used to train AIs.

  • If an AI produces a work that is substantially similar in protected expression to a work that was ingested by the AI, that new work infringes the copyright in the original work.

    • If the original work was registered prior to the infringement, the copyright owner of the original work can bring a copyright infringement action for statutory damages against the AI provider and the user who prompted the AI to produce the substantially similar work.

  • Applying traditional principles of human authorship, a work that is generated by an AI might be copyrightable if the prompts provided by the user sufficiently controlled the AI such that the resulting work as a whole constituted an original work of human authorship.

AI has the potential to disrupt many professions, not just individual creators. The response to this disruption (e.g., support for worker retraining through institutions such as community colleges and public libraries) should be developed on an economy-wide basis, and copyright law should not be treated as a means for addressing these broader societal challenges.

AI also has the potential to serve as a powerful tool in the hands of artists, enabling them to express their creativity in new and efficient ways, thereby furthering the objectives of the copyright system."
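
One of the LCA principles above notes that copyright owners can employ the Robots Exclusion Protocol to keep their works out of AI training crawls. As a rough, hypothetical sketch of how that mechanism works, the short Python snippet below uses only the standard library's urllib.robotparser to check whether a crawler identifying itself with a given user-agent string would be allowed to fetch particular pages; the site URL and the "GPTBot" user-agent name are illustrative assumptions, and the protocol is advisory, so protection still depends on the crawler's operator honoring it.

    # Hypothetical sketch: consulting a site's robots.txt the way a
    # well-behaved training-data crawler might. The site URL and the
    # user-agent string are illustrative assumptions only.
    from urllib import robotparser

    SITE = "https://example.com"      # assumed site publishing a robots.txt
    USER_AGENT = "GPTBot"             # example AI-crawler user agent

    rp = robotparser.RobotFileParser()
    rp.set_url(f"{SITE}/robots.txt")
    rp.read()                         # fetch and parse the robots.txt rules

    for path in ("/articles/essay.html", "/images/photo.jpg"):
        url = f"{SITE}{path}"
        print(f"{USER_AGENT} may fetch {url}: {rp.can_fetch(USER_AGENT, url)}")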

Training Generative AI Models on Copyrighted Works Is Fair Use; ARL Views, January 23, 2024

Katherine Klosek, Director of Information Policy and Federal Relations, Association of Research Libraries (ARL), and Marjory S. Blumenthal, Senior Policy Fellow, American Library Association (ALA) Office of Public Policy and Advocacy, ARL Views; Training Generative AI Models on Copyrighted Works Is Fair Use

"In a blog post about the case, OpenAI cites the Library Copyright Alliance (LCA) position that “based on well-established precedent, the ingestion of copyrighted works to create large language models or other AI training databases generally is a fair use.” LCA explained this position in our submission to the US Copyright Office notice of inquiry on copyright and AI, and in the LCA Principles for Copyright and AI.

LCA is not involved in any of the AI lawsuits. But as champions of fair use, free speech, and freedom of information, libraries have a stake in maintaining the balance of copyright law so that it is not used to block or restrict access to information. We drafted the principles on AI and copyright in response to efforts to amend copyright law to require licensing schemes for generative AI that could stunt the development of this technology, and undermine its utility to researchers, students, creators, and the public. The LCA principles hold that copyright law as applied and interpreted by the Copyright Office and the courts is flexible and robust enough to address issues of copyright and AI without amendment. The LCA principles also make the careful and critical distinction between input to train an LLM, and output—which could potentially be infringing if it is substantially similar to an original expressive work.

On the question of whether ingesting copyrighted works to train LLMs is fair use, LCA points to the history of courts applying the US Copyright Act to AI."

Richard Prince to Pay Photographers Who Sued Over Copyright; The New York Times, January 26, 2024

Matt Stevens, The New York Times; Richard Prince to Pay Photographers Who Sued Over Copyright

"The artist Richard Prince agreed to pay at least $650,000 to two photographers whose images he had incorporated in his own work, ending a long-running copyright dispute that had been closely monitored by the art world...

Brian Sexton, a lawyer for Prince, said the artist wanted to protect free expression and have copyright law catch up to changing technology...

Marriott said the judgments showed that copyright law still provided meaningful protection to creators and that the internet was not a copying free-for-all.

“There is not a fair use exception to copyright law that applies to the famous and another that applies to everyone else,” he said."

Artificial Intelligence Law - Intellectual Property Protection for your voice?; JDSupra, January 22, 2024

 Steve Vondran, JDSupra ; Artificial Intelligence Law - Intellectual Property Protection for your voice?

"With the advent of AI technology capable of replicating a person's voice and utilizing it for commercial purposes, several key legal issues are likely to emerge under California's right of publicity law. The right of publicity refers to an individual's right to control and profit from their own name, image, likeness, or voice.

Determining the extent of a person's control over their own voice will likely become a contentious legal matter given the rise of AI technology. In 2024, with a mere prompt and a push of a button, a creator can generate highly accurate voice replicas, potentially allowing companies to utilize a person's voice without their explicit permission, for example by using an AI-generated song in a video or podcast, or using it as a voice-over for a commercial project. This sounds like fun new technology, until you realize that in states like California, where a "right of publicity" law exists, a person's voice can be a protectable asset that one can sue to protect against others who wrongfully misuse it for commercial advertising purposes.

This blog will discuss a few new legal issues I see arising in our wonderful new digital age being fueled by the massive onset of Generative AI technology (which really just means you input prompts into an AI tool and it will generate art, text, images, music, etc."