Showing posts with label AI. Show all posts

Friday, February 2, 2024

European Publishers Praise New EU AI Law; Publishers Weekly, February 2, 2024

Ed Nawotka, Publishers Weekly; European Publishers Praise New EU AI Law

"The Federation of European Publishers (FEP) was quick to praise the passage of new legislation by the European Union that, among its provisions, requires "general purpose AI companies" to respect copyright law and have policies in place to this effect.

FEP officials called the EU Artificial Intelligence (AI) Act, which passed on February 2, the "world’s first concrete regulation of AI," and said that the legislation seeks to "ensure the ethical and human-centric development of this technology and prevent abusive or illegal practices." The law also demands transparency about what data is being used in training the models."

Saturday, January 27, 2024

Library Copyright Alliance Principles for Copyright and Artificial Intelligence; Library Copyright Alliance (LCA), American Library Association (ALA), Association of Research Libraries (ARL), July 10, 2023

Library Copyright Alliance (LCA), American Library Association (ALA), Association of Research Libraries (ARL); Library Copyright Alliance Principles for Copyright and Artificial Intelligence

"The existing U.S. Copyright Act, as applied and interpreted by the Copyright Office and the courts, is fully capable at this time to address the intersection of copyright and AI without amendment.

  • Based on well-established precedent, the ingestion of copyrighted works to create large language models or other AI training databases generally is a fair use.

    • Because tens—if not hundreds—of millions of works are ingested to create an LLM, the contribution of any one work to the operation of the LLM is de minimis; accordingly, remuneration for ingestion is neither appropriate nor feasible.

    • Further, copyright owners can employ technical means such as the Robots Exclusion Protocol to prevent their works from being used to train AIs.

  • If an AI produces a work that is substantially similar in protected expression to a work that was ingested by the AI, that new work infringes the copyright in the original work.

    • If the original work was registered prior to the infringement, the copyright owner of the original work can bring a copyright infringement action for statutory damages against the AI provider and the user who prompted the AI to produce the substantially similar work.

  • Applying traditional principles of human authorship, a work that is generated by an AI might be copyrightable if the prompts provided by the user sufficiently controlled the AI such that the resulting work as a whole constituted an original work of human authorship.

AI has the potential to disrupt many professions, not just individual creators. The response to this disruption (e.g., support for worker retraining through institutions such as community colleges and public libraries) should be developed on an economy-wide basis, and copyright law should not be treated as a means for addressing these broader societal challenges.

AI also has the potential to serve as a powerful tool in the hands of artists, enabling them to express their creativity in new and efficient ways, thereby furthering the objectives of the copyright system."

Training Generative AI Models on Copyrighted Works Is Fair Use; ARL Views, January 23, 2024

Katherine Klosek, Director of Information Policy and Federal Relations, Association of Research Libraries (ARL), and Marjory S. Blumenthal, Senior Policy Fellow, American Library Association (ALA) Office of Public Policy and Advocacy, ARL Views; Training Generative AI Models on Copyrighted Works Is Fair Use

"In a blog post about the case, OpenAI cites the Library Copyright Alliance (LCA) position that “based on well-established precedent, the ingestion of copyrighted works to create large language models or other AI training databases generally is a fair use.” LCA explained this position in our submission to the US Copyright Office notice of inquiry on copyright and AI, and in the LCA Principles for Copyright and AI.

LCA is not involved in any of the AI lawsuits. But as champions of fair use, free speech, and freedom of information, libraries have a stake in maintaining the balance of copyright law so that it is not used to block or restrict access to information. We drafted the principles on AI and copyright in response to efforts to amend copyright law to require licensing schemes for generative AI that could stunt the development of this technology, and undermine its utility to researchers, students, creators, and the public. The LCA principles hold that copyright law as applied and interpreted by the Copyright Office and the courts is flexible and robust enough to address issues of copyright and AI without amendment. The LCA principles also make the careful and critical distinction between input to train an LLM, and output—which could potentially be infringing if it is substantially similar to an original expressive work.

On the question of whether ingesting copyrighted works to train LLMs is fair use, LCA points to the history of courts applying the US Copyright Act to AI."

Friday, January 26, 2024

The Sleepy Copyright Office in the Middle of a High-Stakes Clash Over A.I.; The New York Times, January 25, 2024

  Cecilia Kang, The New York Times; The Sleepy Copyright Office in the Middle of a High-Stakes Clash Over A.I.

"For decades, the Copyright Office has been a small and sleepy office within the Library of Congress. Each year, the agency’s 450 employees register roughly half a million copyrights, the ownership rights for creative works, based on a two-centuries-old law.

In recent months, however, the office has suddenly found itself in the spotlight. Lobbyists for Microsoft, Google, and the music and news industries have asked to meet with Shira Perlmutter, the register of copyrights, and her staff. Thousands of artists, musicians and tech executives have written to the agency, and hundreds have asked to speak at listening sessions hosted by the office.

The attention stems from a first-of-its-kind review of copyright law that the Copyright Office is conducting in the age of artificial intelligence. The technology — which feeds off creative content — has upended traditional norms around copyright, which gives owners of books, movies and music the exclusive ability to distribute and copy their works.

The agency plans to put out three reports this year revealing its position on copyright law in relation to A.I. The reports are set to be hugely consequential, weighing heavily in courts as well as with lawmakers and regulators."

Wednesday, January 24, 2024

Is A.I. the Death of I.P.?; The New Yorker, January 15, 2024

Louis Menand, The New Yorker; Is A.I. the Death of I.P.?

"Intellectual property accounts for some or all of the wealth of at least half of the world’s fifty richest people, and it has been estimated to account for fifty-two per cent of the value of U.S. merchandise exports. I.P. is the new oil. Nations sitting on a lot of it are making money selling it to nations that have relatively little. It’s therefore in a country’s interest to protect the intellectual property of its businesses.

But every right is also a prohibition. My right of ownership of some piece of intellectual property bars everyone else from using that property without my consent. I.P. rights have an economic value but a social cost. Is that cost too high?

I.P. ownership comes in several legal varieties: copyrights, patents, design rights, publicity rights, and trademarks."

Wednesday, January 10, 2024

Addressing equity and ethics in artificial intelligence; American Psychological Association, January 8, 2024

 Zara Abrams, American Psychological Association; Addressing equity and ethics in artificial intelligence

"As artificial intelligence (AI) rapidly permeates our world, researchers and policymakers are scrambling to stay one step ahead. What are the potential harms of these new tools—and how can they be avoided?

“With any new technology, we always need to be thinking about what’s coming next. But AI is moving so fast that it’s difficult to grasp how significantly it’s going to change things,” said David Luxton, PhD, a clinical psychologist and an affiliate professor at the University of Washington’s School of Medicine who is part of a session at the upcoming 2024 Consumer Electronics Show (CES) on Harnessing the Power of AI Ethically.

Luxton and his colleagues dubbed recent AI advances “super-disruptive technology” because of their potential to profoundly alter society in unexpected ways. In addition to concerns about job displacement and manipulation, AI tools can cause unintended harm to individuals, relationships, and groups. Biased algorithms can promote discrimination or other forms of inaccurate decision-making that can cause systematic and potentially harmful errors; unequal access to AI can exacerbate inequality (Proceedings of the Stanford Existential Risk Conference 2023, 60–74). On the flip side, AI may also hold the potential to reduce unfairness in today’s world—if people can agree on what “fairness” means.

“There’s a lot of pushback against AI because it can promote bias, but humans have been promoting biases for a really long time,” said psychologist Rhoda Au, PhD, a professor of anatomy and neurobiology at the Boston University Chobanian & Avedisian School of Medicine who is also speaking at CES on harnessing AI ethically. “We can’t just be dismissive and say: ‘AI is good’ or ‘AI is bad.’ We need to embrace its complexity and understand that it’s going to be both.”"

"Stories Are Just Something That Can Be Eaten by an AI": Marvel Lashes Out at AI Content with a Mind-Blowing X-Men Twist; ScreenRant, January 9, 2024

TRISTAN BENNS, ScreenRant; "Stories Are Just Something That Can Be Eaten by an AI": Marvel Lashes Out at AI Content with a Mind-Blowing X-Men Twist

"Realizing the folly of her actions, Righteous laments her weakness against Enigma as a creature of stories, saying that “Stories are just something that can be eaten by an A.I. to make it more powerful. The only good story is a story that has been entirely and totally consumed and exploited.”

While this isn’t the mutants’ first battle against artificial intelligence, this pointed statement has some sobering real-world applications. Since the Krakoan Age began, it’s been clear mutantkind's greatest battle would be against the concept of artificial intelligence as the final evolution of “life” in the Marvel Universe. With entities like Nimrod and the Omega Sentinel steering the forces of Orchis and other enemies of the X-Men against the mutant nation, this conflict has been painted as the ultimate fight for survival for mutants. However, with Enigma’s ultimate triumph over even the power of storytelling, it is clear that the X-Men aren’t just facing a comic’s interpretation of artificial intelligence – they’re battling the death of imagination.

In this way, the X-Men’s ultimate battle parallels a very real-world problem that both fans and creators must confront: the act of true creation versus the effects of generative artificial intelligence."

Saturday, January 6, 2024

AI’s future could hinge on one thorny legal question; The Washington Post, January 4, 2024

 

The Washington Post; AI’s future could hinge on one thorny legal question

"Because the AI cases represent new terrain in copyright law, it is not clear how judges and juries will ultimately rule, several legal experts agreed...

“Anyone who’s predicting the outcome is taking a big risk here,” Gervais said...

Cornell’s Grimmelmann said AI copyright cases might ultimately hinge on the stories each side tells about how to weigh the technology’s harms and benefits.

“Look at all the lawsuits, and they’re trying to tell stories about how these are just plagiarism machines ripping off artists,” he said. “Look at the [AI firms’ responses], and they’re trying to tell stories about all the really interesting things these AIs can do that are genuinely new and exciting.”"

Wednesday, January 3, 2024

Tuesday, January 2, 2024

Copyright law is AI's 2024 battlefield; Axios, January 2, 2024

Megan Morrone, Axios; Copyright law is AI's 2024 battlefield

"Looming fights over copyright in AI are likely to set the new technology's course in 2024 faster than legislation or regulation.

Driving the news: The New York Times filed a lawsuit against OpenAI and Microsoft on December 27, claiming their AI systems' "widescale copying" constitutes copyright infringement.

The big picture: After a year of lawsuits from creators protecting their works from getting gobbled up and repackaged by generative AI tools, the new year could see significant rulings that alter the progress of AI innovation. 

Why it matters: The copyright decisions coming down the pike — over both the use of copyrighted material in the development of AI systems and also the status of works that are created by or with the help of AI — are crucial to the technology's future and could determine winners and losers in the market."

How the Federal Government Can Rein In A.I. in Law Enforcement; The New York Times, January 2, 2024

 Joy Buolamwini and , The New York Times; How the Federal Government Can Rein In A.I. in Law Enforcement

"One of the most hopeful proposals involving police surveillance emerged recently from a surprising quarter — the federal Office of Management and Budget. The office, which oversees the execution of the president’s policies, has recommended sorely needed constraints on the use of artificial intelligence by federal agencies, including law enforcement.

The office’s work is commendable, but shortcomings in its proposed guidance to agencies could still leave people vulnerable to harm. Foremost among them is a provision that would allow senior officials to seek waivers by arguing that the constraints would hinder law enforcement. Those law enforcement agencies should instead be required to provide verifiable evidence that A.I. tools they or their vendors use will not cause harm, worsen discrimination or violate people’s rights."

Monday, January 1, 2024

Roberts sidesteps Supreme Court’s ethics controversies in yearly report; The Washington Post, December 31, 2023

The Washington Post; Roberts sidesteps Supreme Court’s ethics controversies in yearly report

"Roberts, a history buff, also expounded on the potential for artificial intelligence to both enhance and detract from the work of judges, lawyers and litigants. For those who cannot afford a lawyer, he noted, AI could increase access to justice.

“AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike. But just as it risks invading privacy interests and dehumanizing the law,” Roberts wrote, “machines cannot fully replace key actors in court.”...

Roberts also did not mention in his 13-page report the court’s adoption for the first time of a formal code of conduct, announced in November, specific to the nine justices and intended to promote “integrity and impartiality.” For years, the justices said they voluntarily complied with the same ethical guidelines that apply to other federal judges and resisted efforts by Congress to impose a policy on the high court...

The policy was praised by some as a positive initial step, but criticized by legal ethics experts for giving the justices too much discretion over recusal decisions and for not including a process for holding the justices accountable if they violate their own rules."

Friday, December 29, 2023

Testing Ethical Boundaries. The New York Times Sues Microsoft And OpenAI On Copyright Concerns; Forbes, December 29, 2023

 Cindy Gordon, Forbes; Testing Ethical Boundaries. The New York Times Sues Microsoft And OpenAI On Copyright Concerns

"We have at least seen Apple announce an ethical approach to discussing upfront with the US Media giants their interest in partnering on generative AI training needs and finding new revenue sharing models.

Smart Move by Apple...

The court’s rulings here will be critical to advance ethical AI practices and guard rails on what is “fair” versus predatory.

We have too many leadership behaviors that encroach on others’ Intellectual Property (IP) and try to mask or muddy the authenticity of communication and sources of origination of ideas and content.

I for one will be following these cases closely and this also sends a wake-up call to all technology titans and technology industry leaders that respect, integrity and transparency on operating practices need an ethical overhauling.

One of the important leadership behaviors is risk management and looking at all stakeholder views and appreciating the risks that can be incurred. I am keen to see how Apple approaches these dynamics to build a stronger ethical brand profile."

Sunday, December 24, 2023

AI cannot patent inventions, UK Supreme Court confirms; BBC, December 20, 2023

 BBC ; AI cannot patent inventions, UK Supreme Court confirms

"The UK Supreme Court has upheld earlier decisions in rejecting a bid to allow an artificial intelligence to be named as an inventor in a patent application.

Technologist Dr Stephen Thaler had sought to have his AI, called Dabus, recognised as the inventor of a food container and a flashing light beacon."

Monday, December 18, 2023

AI could threaten creators — but only if humans let it; The Washington Post, December 17, 2023

The Washington Post; AI could threaten creators — but only if humans let it

"A broader rethinking of copyright, perhaps inspired by what some AI companies are already doing, could ensure that human creators get some recompense when AI consumes their work, processes it and produces new material based on it in a manner current law doesn’t contemplate. But such a shift shouldn’t be so punishing that the AI industry has no room to grow. That way, these tools, in concert with human creators, can push the progress of science and useful arts far beyond what the Framers could have imagined."

Thursday, December 14, 2023

Senator to Pope Francis: Not so fast on AI; Politico, December 14, 2023


"Congress hasn’t done enough work on artificial intelligence regulation in the U.S. to join Pope Francis’ proposal for a global treaty to regulate the technology, Sen. Mark Warner told POLITICO. On Thursday, Francis called for a binding treaty that would ensure artificial intelligence is developed and used ethically. He said in a statement that the risks of technology lacking human values of compassion, mercy, morality and forgiveness are too great — and that failing to regulate it could “pose a risk to our survival.”

Friday, December 1, 2023

Copyright law will shape how we use generative AI; Axios, December 1, 2023

 Megan Morrone, Axios; Copyright law will shape how we use generative AI

"In the year since the release of ChatGPT, generative AI has been moving fast and breaking things — and copyright law is only beginning to catch up. 

Why it matters: From Section 230 to the Digital Millennium Copyright Act (DMCA) to domain name squatting protections, intellectual property law has shaped the internet for three decades. Now, it will shape the way we use generative AI.

Driving the news: The Biden administration's recent executive order contained no initial guidance on copyright law and AI, which means these decisions will largely be left up to the courts."

Tuesday, November 21, 2023

Patent Poetry: Judge Throws Out Most of Artists’ AI Copyright Infringement Claims; JD Supra, November 20, 2023

Adam Philipp, AEON Law, JD Supra; Patent Poetry: Judge Throws Out Most of Artists’ AI Copyright Infringement Claims

"One of the plaintiffs’ theories of infringement was that the output images based on the Training Images are all infringing derivative works.

The court noted that to support that claim the output images would need to be substantially similar to the protected works. However, noted the court,

none of the Stable Diffusion output images provided in response to a particular Text Prompt is likely to be a close match for any specific image in the training data.

The plaintiffs argued that there was no need to show substantial similarity when there was direct proof of copying. The judge was skeptical of that argument.

This is just one of many AI-related cases making its way through the courts, and this is just a ruling on a motion rather than an appellate court decision. Nevertheless, this line of analysis will likely be cited in other cases now pending.

Also, this case shows the importance of artists registering their works with the Copyright Office before seeking to sue for infringement."

Sunday, November 19, 2023

‘Please regulate AI:' Artists push for U.S. copyright reforms but tech industry says not so fast; AP, November 18, 2023

MATT O’BRIEN, AP; ‘Please regulate AI:' Artists push for U.S. copyright reforms but tech industry says not so fast

"Most tech companies cite as precedent Google’s success in beating back legal challenges to its online book library. The U.S. Supreme Court in 2016 let stand lower court rulings that rejected authors’ claim that Google’s digitizing of millions of books and showing snippets of them to the public amounted to copyright infringement.

But that’s a flawed comparison, argued former law professor and bestselling romance author Heidi Bond, who writes under the pen name Courtney Milan. Bond said she agrees that “fair use encompasses the right to learn from books,” but Google Books obtained legitimate copies held by libraries and institutions, whereas many AI developers are scraping works of writing through “outright piracy.”

Perlmutter said this is what the Copyright Office is trying to help sort out.

“Certainly this differs in some respects from the Google situation,” Perlmutter said. “Whether it differs enough to rule out the fair use defense is the question in hand.”"

Wednesday, November 15, 2023

U.S. Copyright Office Extends Deadline for Reply Comments on Artificial Intelligence Notice of Inquiry; U.S. Copyright Office, November 15, 2023

U.S. Copyright Office, Issue No. 1026; U.S. Copyright Office Extends Deadline for Reply Comments on Artificial Intelligence Notice of Inquiry

"The U.S. Copyright Office is extending the deadline to submit reply comments in response to the Office’s August 30, 2023, notice of inquiry regarding artificial intelligence and copyright. The extension will ensure that members of the public have sufficient time to prepare responses to the Office’s questions and submitted comments and that the Office can proceed on a timely basis with its inquiry into the issues identified in its notice with the benefit of a complete record.

Reply comments are now due by 11:59 p.m. eastern time on Wednesday, December 6, 2023.

The Federal Register notice announcing this extension and additional information, including instructions for submitting comments, are available on the Artificial Intelligence Study webpage."