Showing posts with label AI. Show all posts

Thursday, February 20, 2025

AI and Copyright: Expanding Copyright Hurts Everyone—Here’s What to Do Instead; Electronic Frontier Foundation (EFF), February 19, 2025

Tori Noble, Electronic Frontier Foundation (EFF); AI and Copyright: Expanding Copyright Hurts Everyone—Here’s What to Do Instead


[Kip Currier: No, not everyone. Not requiring Big Tech to figure out a way to fairly license or get permission to use the copyrighted works of creators unjustly advantages these deep-pocketed corporations. It also inequitably disadvantages the economic and creative interests of the human beings who labor to create copyrightable content -- authors, songwriters, visual artists, and many others.

The tell is that many of these same Big Tech companies are only too willing to file copyright infringement lawsuits against anyone who they allege is infringing their AI content to create competing products and services.]


[Excerpt]


"Threats to Socially Valuable Research and Innovation 

Requiring researchers to license fair uses of AI training data could make socially valuable research based on machine learning (ML) and even text and data mining (TDM) prohibitively complicated and expensive, if not impossible. Researchers have relied on fair use to conduct TDM research for a decade, leading to important advancements in myriad fields. However, licensing the vast quantity of works that high-quality TDM research requires is frequently cost-prohibitive and practically infeasible.  

Fair use protects ML and TDM research for good reason. Without fair use, copyright would hinder important scientific advancements that benefit all of us. Empirical studies back this up: research using TDM methodologies are more common in countries that protect TDM research from copyright control; in countries that don’t, copyright restrictions stymie beneficial research. It’s easy to see why: it would be impossible to identify and negotiate with millions of different copyright owners to analyze, say, text from the internet."

Wednesday, February 5, 2025

Google lifts its ban on using AI for weapons; BBC, February 5, 2025

Lucy Hooker & Chris Vallance, BBC; Google lifts its ban on using AI for weapons

"Google's parent company has ditched a longstanding principle and lifted a ban on artificial intelligence (AI) being used for developing weapons and surveillance tools.

Alphabet has rewritten its guidelines on how it will use AI, dropping a section which previously ruled out applications that were "likely to cause harm".

In a blog post Google defended the change, arguing that businesses and democratic governments needed to work together on AI that "supports national security".

Experts say AI could be widely deployed on the battlefield - though there are fears about its use too, particularly with regard to autonomous weapons systems."

Thursday, January 30, 2025

Vatican says AI has 'shadow of evil,' calls for close oversight; Reuters, January 28, 2025

Reuters; Vatican says AI has 'shadow of evil,' calls for close oversight

"The Vatican on Tuesday called for governments to keep a close eye on the development of artificial intelligence, warning the technology contained "the shadow of evil" in its ability to spread misinformation.

"AI generated fake media can gradually undermine the foundations of society," said a new text on the ethics of AI, written by two Vatican departments and approved by Pope Francis.

"This issue requires careful regulation, as misinformation—especially through AI-controlled or influenced media—can spread unintentionally, fuelling political polarization and social unrest," it said."

Wednesday, January 29, 2025

Copyright Office Releases Part 2 of Artificial Intelligence Report; U.S. Copyright Office, Issue No. 1060, January 29, 2025

U.S. Copyright Office, Issue No. 1060; Copyright Office Releases Part 2 of Artificial Intelligence Report

"Today, the U.S. Copyright Office is releasing Part 2 of its Report on the legal and policy issues related to copyright and artificial intelligence (AI). This Part of the Report addresses the copyrightability of outputs created using generative AI. The Office affirms that existing principles of copyright law are flexible enough to apply to this new technology, as they have applied to technological innovations in the past. It concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements. This can include situations where a human-authored work is perceptible in an AI output, or a human makes creative arrangements or modifications of the output, but not the mere provision of prompts. The Office confirms that the use of AI to assist in the process of creation or the inclusion of AI-generated material in a larger human-generated work does not bar copyrightability. It also finds that the case has not been made for changes to existing law to provide additional protection for AI-generated outputs.

“After considering the extensive public comments and the current state of technological development, our conclusions turn on the centrality of human creativity to copyright,” said Shira Perlmutter, Register of Copyrights and Director of the U.S. Copyright Office. “Where that creativity is expressed through the use of AI systems, it continues to enjoy protection. Extending protection to material whose expressive elements are determined by a machine, however, would undermine rather than further the constitutional goals of copyright.”

In early 2023, the Copyright Office announced a broad initiative to explore the intersection of copyright and AI. Since then, the Office has issued registration guidance for works incorporating AI-generated content, hosted public listening sessions and webinars, met with experts and stakeholders, published a notice of inquiry seeking input from the public, and reviewed more than 10,000 responsive comments, which served to inform these conclusions.

The Report is being released in three Parts. Part 1 was published on July 31, 2024, and recommended federal legislation to respond to the unauthorized distribution of digital replicas that realistically but falsely depict an individual. The final, forthcoming Part 3 will address the legal implications of training AI models on copyrighted works, including licensing considerations and the allocation of any potential liability.

As announced last year, the Office also plans to supplement its March 2023 registration guidance and update the relevant sections of the Compendium of U.S. Copyright Office Practices.

For more information about the Copyright Office’s AI Initiative, please visit the website."

Tuesday, January 28, 2025

Former OpenAI safety researcher brands pace of AI development ‘terrifying’; The Guardian, January 28, 2025

Global technology editor, The Guardian; Former OpenAI safety researcher brands pace of AI development ‘terrifying’

"A former safety researcher at OpenAI says he is “pretty terrified” about the pace of development in artificial intelligence, warning the industry is taking a “very risky gamble” on the technology.

Steven Adler expressed concerns about companies seeking to rapidly develop artificial general intelligence (AGI), a theoretical term referring to systems that match or exceed humans at any intellectual task."

Saturday, January 25, 2025

Copyright Under Siege: How Big Tech Uses AI And China To Exploit Creators; Forbes, January 25, 2025

Virginie Berger, Forbes; Copyright Under Siege: How Big Tech Uses AI And China To Exploit Creators

"Generative AI is reshaping creativity in ways that highlight a troubling paradox: while touted as a force for innovation, it increasingly relies on exploiting copyrighted materials, songs, books, and artworks, without consent or compensation. This transformation underscores the growing conflict between technological progress and the preservation of artistic integrity. At the heart of the issue lies a troubling paradox: while companies like OpenAI and Google promote AI as a force for innovation, their reliance on scraping copyrighted materials, songs, books, and artworks, undermines the very creativity they claim to enhance. This exploitation is often disguised as progress or justified as necessary for global competitiveness, particularly in the AI race against China. However, these claims mask a deeper reality: the consolidation of power by Big Tech at the expense of creators. As the balance of influence shifts, those who drive culture and innovation are increasingly marginalized, raising urgent questions about the future of intellectual property and creative industries."

Friday, January 24, 2025

‘Eternal You’ and the Ethics of Using A.I. to ‘Talk’ to Dead Loved Ones; The New York Times, January 24, 2025

The New York Times; ‘Eternal You’ and the Ethics of Using A.I. to ‘Talk’ to Dead Loved Ones

"As the title suggests, “Eternal You” is mostly concerned with a very particular use of A.I.: giving users the illusion of talking to their dead loved ones. Large language models trained on the deceased’s speech patterns, chat logs and more can be made to imitate that person’s way of communicating so well that it feels to the grief-stricken as if they’re crossing the border between life and death. Those tools can be comforting, but they’re also potentially big business. One of the film’s subjects calls it “death capitalism.”"

Monday, January 20, 2025

Is the law playing catch-up with AI?; Harvard Law Today, January 16, 2025

 Harvard Law Today; Is the law playing catch-up with AI?

"Harvard Law Today: Why was the Artificial Intelligence and Intellectual Property Law Conference in November convened? Why is it important to be talking about AI and IP right now?

William Lee: In the past, this event has been much more focused on the specifics of the law and comparisons of the different approaches across jurisdictions. This year, the conference addressed AI more generally with moderators and panelists from a wider variety of fields including homeland security, life sciences, technological development, non-profit advocacy, and even ethics. I think it was an introduction into AI for many of the people in the room and who better to provide that introduction than [Harvard Law School Professor] Jonathan Zittrain ’95. Matt Ferraro, senior counselor for cybersecurity and emerging technology to the secretary of Homeland Security and executive director of the Artificial Intelligence Safety and Security Board, led a panel primarily of industry leaders, explaining the capabilities and trajectory of AI technology. Then, Iain Cunningham from NVIDIA chaired an excellent panel mostly composed of academics and people from industry discussing how IP law and AI interact. We also had input from experts on the AI and IP relationship in jurisdictions across the globe, including Europe, the UK, and Africa, on a panel moderated by Terry Fisher that was particularly illuminating. Then, we closed with a judges panel where a group of five Federal Circuit and two District Court judges offered views on AI issues as well as IP more broadly.

Louis Tompros: IP law has historically, and inherently, operated at the intersection of law and fast-moving technology. Artificial Intelligence is currently where technology is moving the fastest and where the law has the most ground to cover in order to keep pace. This conference was designed to educate folks about AI technology and the various IP law approaches taken in the United States and around the world, and to help forecast how protections for creative and useful ideas will function in the context of these innovative systems. We try to make the IP conference as broadly appealing and relevant to the groups of constituents that are interested in participating, that is, people within the legal community, the business community, and the academic community, including Harvard Law School students. This year was the first time ever that the conference was fully subscribed via pre-registration which is, I think, a great testament to the level and breadth of interest. You can tell that we got it right precisely because of the incredible interest in this year’s event.

HLT: Throughout history, innovations have compelled IP law to adjust and evolve to account for new technology, like the radio, the television, and the internet. Is AI different?

Tompros: The law can’t possibly stay ahead. It will always lag a bit behind. Honestly, that’s part of the fun of IP law because the law is perpetually having to evolve by necessity to keep pace with rapidly evolving aspects of technology. I don’t think AI is different in kind from previous technological revolutions that affected the law, but I do think it is quite possibly different in scale. The pace of the development of the technology here is so accelerated that the speed at which technological advances are coming makes it even harder for the already trailing legal system to catch up. That leads to some interesting possibilities, but it also leads to some serious challenges. Ultimately, it demands creative and innovative thinking in the design of legal structures established to try to manage it."

Sunday, January 19, 2025

Congress Must Change Copyright Law for AI | Opinion; Newsweek, January 16, 2025

Assistant Professor of Business Law, Georgia College and State University, Newsweek; Congress Must Change Copyright Law for AI | Opinion

"Luckily, the Constitution points the way forward. In Article I, Section 8, Congress is explicitly empowered "to promote the Progress of Science" through copyright law. That is to say, the power to create copyrights isn't just about protecting content creators, it's also about advancing human knowledge and innovation.

When the Founders gave Congress this power, they couldn't have imagined artificial intelligence, but they clearly understood that intellectual property laws would need to evolve to promote scientific progress. Congress therefore not only has the authority to adapt copyright law for the AI age, it has the duty to ensure our intellectual property framework promotes rather than hinders technological progress.

Consider what's at risk with inaction...

While American companies are struggling with copyright constraints, China is racing ahead with AI development, unencumbered by such concerns. The Chinese Communist Party has made it clear that they view AI supremacy as a key strategic goal, and they're not going to let intellectual property rights stand in their way.

The choice before us is clear, we can either reform our copyright laws to enable responsible AI development at home or we can watch as the future of AI is shaped by authoritarian powers abroad. The cost of inaction isn't just measured in lost innovation or economic opportunity, it is measured in our diminishing ability to ensure AI develops in alignment with democratic values and a respect for human rights.

The ideal solution here isn't to abandon copyright protection entirely, but to craft a careful exemption for AI training. This could even include provisions for compensating content creators through a mandated licensing framework or revenue-sharing system, ensuring that AI companies can access the data they need while creators can still benefit from and be credited for their work's use in training these models.

Critics will argue that this represents a taking from creators for the benefit of tech companies, but this misses the broader picture. The benefits of AI development flow not just to tech companies but to society as a whole. We should recognize that allowing AI models to learn from human knowledge serves a crucial public good, one we're at risk of losing if Congress doesn't act."

Thursday, January 16, 2025

The Washington Post’s New Mission: Reach ‘All of America’; The New York Times, January 16, 2025

The New York Times; The Washington Post’s New Mission: Reach ‘All of America’


[Kip Currier: “Two things only the people anxiously desire — bread and circuses.” 

-- Juvenal, Roman satirical poet (c. 100 AD).


To think that The Washington Post was the newspaper whose investigative reporters Bob Woodward and Carl Bernstein exposed the 1970s Watergate break-in and cover-up, resulting in the eventual resignation of President Richard Nixon on August 8, 1974...

And to now see its stature intentionally diminished and its mission incrementally debased, week by week, at the hands of billionaire Jeff Bezos and his hand-picked former newspaper administrators, who worked for billionaire Rupert Murdoch's U.K. newspapers.]


[Excerpt]

"After Donald J. Trump entered the White House in 2017, The Washington Post adopted a slogan that underscored the newspaper’s traditional role as a government watchdog: “Democracy Dies in Darkness.”

This week, as Mr. Trump prepares to re-enter the White House, the newspaper debuted a mission statement that evokes a more expansive view of The Post’s journalism, without death or darkness: “Riveting Storytelling for All of America.”...

The slide deck that Ms. Watford presented describes artificial intelligence as a key enabler of The Post’s success, the people said. It describes The Post as “an A.I.-fueled platform for news” that delivers “vital news, ideas and insights for all Americans where, how and when they want it.” It also lays out three pillars of The Post’s overall plan: “great journalism,” “happy customers” and “make money.” The Post lost roughly $77 million in 2023.

But many aspects of The Post’s new mission have nothing to do with emerging technology. The slide deck includes a list of seven principles first articulated by Eugene Meyer, an influential Post owner, in 1935. Among them: “the newspaper shall tell all the truth” and “the newspaper’s duty is to its readers and to the public at large, and not to the private interests of its owners.”"

Biden bids farewell with dark warning for America: the oligarchs are coming; The Guardian, January 15, 2025

The Guardian; Biden bids farewell with dark warning for America: the oligarchs are coming

"The primetime speech did not mention Donald Trump by name. Instead it will be remembered for its dark, ominous warning about something wider and deeper of which Trump is a symptom.

“Today, an oligarchy is taking shape in America of extreme wealth, power, and influence that literally threatens our entire democracy, our basic rights and freedom and a fair shot for everyone to get ahead,” Biden said.

The word “oligarchy” comes from the Greek words meaning rule (arche) by the few (oligos). Some have argued that the dominant political divide in America is no longer between left and right, but between democracy and oligarchy, as power becomes concentrated in the hands of a few. The wealthiest 1% of Americans now has more wealth than the bottom 90% combined.

The trend did not start with Trump but he is set to accelerate it. The self-styled working-class hero has picked the richest cabinet in history, including 13 billionaires, surrounding himself with the very elite he claims to oppose. Elon Musk, the world’s richest man, has become a key adviser. Tech titans Musk, Jeff Bezos and Mark Zuckerberg – collectively worth a trillion dollars – will be sitting at his inauguration on Monday.

Invoking former president Dwight Eisenhower’s farewell address in January 1961 that warned against the rise of a military-industrial complex, Biden said: “Six decades later, I’m equally concerned about the potential rise of a tech industrial complex. It could pose real dangers for our country as well. Americans are being buried under an avalanche of misinformation and disinformation, enabling the abuse of power.”

In an acknowledgement of news deserts and layoffs at venerable institutions such as the Washington Post, Biden added starkly: “The free press is crumbling. Editors are disappearing. Social media is giving up on fact checking. Truth is smothered by lies, told for power and for profit. We must hold the social platforms accountable, to protect our children, our families and our very democracy from the abuse of power.”

Zuckerberg’s recent decision to abandon factcheckers on Facebook, and Musk’s weaponisation of X in favour of far-right movements including Maga, was surely uppermost in Biden’s mind. Trust in the old media is breaking down as people turn to a fragmented new ecosystem. It has all happened with disorienting speed."

Wednesday, January 15, 2025

Meta Lawyer Lemley Quits AI Case Citing Zuckerberg 'Descent'; Bloomberg Law, January 14, 2025

Bloomberg Law; Meta Lawyer Lemley Quits AI Case Citing Zuckerberg 'Descent'

"California attorney Mark Lemley dropped Meta Platforms Inc. as a client in a high-profile copyright case because of CEO Mark Zuckerberg’s “descent into toxic masculinity and Neo-Nazi madness,” the Stanford University professor said on LinkedIn."

Tuesday, January 14, 2025

USPTO announces new Artificial Intelligence Strategy to empower responsible implementation of innovation; United States Patent and Trademark Office (USPTO), January 14, 2025

United States Patent and Trademark Office (USPTO); USPTO announces new Artificial Intelligence Strategy to empower responsible implementation of innovation

"AI Strategy outlines how the USPTO will address AI's impact across IP policy, agency operations, and the broader innovation ecosystem  

WASHINGTON—The U.S. Patent and Trademark Office (USPTO) announced a new Artificial Intelligence (AI) Strategy to guide the agency’s efforts toward fulfilling the potential of AI within USPTO operations and across the intellectual property (IP) ecosystem. The Strategy offers a vision for how the USPTO can foster responsible and inclusive AI innovation, harness AI to support the agency’s mission, and advance a positive future for AI to ensure that the country maintains its leadership in innovation. 

“We have a responsibility to promote, empower, and protect innovation,” said Derrick Brent, Acting Under Secretary of Commerce for Intellectual Property and Acting Director of the USPTO. “Developing a strategy to unleash the power of AI while mitigating risks provides a framework to advance innovation and intellectual property.”  

The strategy aims to achieve the USPTO’s AI vision and mission through five focus areas which include: 

  1. Advance the development of IP policies that promote inclusive AI innovation and creativity. 
  2. Build best-in-class AI capabilities by investing in computational infrastructure, data resources, and business-driven product development. 
  3. Promote the responsible use of AI within the USPTO and across the broader innovation ecosystem.
  4. Develop AI expertise within the USPTO’s workforce.
  5. Collaborate with other U.S. government agencies, international partners, and the public on shared AI priorities.

The USPTO and our sister agencies within the Department of Commerce, as well as the U.S. Copyright Office, are providing critical guidance and recommendations to advance AI-driven innovation and creativity. In 2022, the USPTO created the AI and Emerging Technology (ET) Partnership, which has worked closely with the AI/ET community to gather public feedback through a series of sessions on topics related to AI and innovation, biotech, and intellectual property (IP) policy. Since its 2022 launch, more than 6,000 stakeholders have engaged with us on these critical issues. In addition, the USPTO collaborates across government to advance American leadership in AI by promoting innovation and competition as set forth in the Biden-Harris Administration’s landmark October 2023 AI Executive Order. 

The full text of the AI Strategy can be found on the AI Strategy webpage. Additional information on AI, including USPTO guidance and more on USPTO’s AI/ET Partnership, can be found on our AI webpage."

Monday, January 6, 2025

At the Intersection of A.I. and Spirituality; The New York Times, January 3, 2025

The New York Times; At the Intersection of A.I. and Spirituality

"For centuries, new technologies have changed the ways people worship, from the radio in the 1920s to television sets in the 1950s and the internet in the 1990s. Some proponents of A.I. in religious spaces have gone back even further, comparing A.I.’s potential — and fears of it — to the invention of the printing press in the 15th century.

Religious leaders have used A.I. to translate their livestreamed sermons into different languages in real time, blasting them out to international audiences. Others have compared chatbots trained on tens of thousands of pages of Scripture to a fleet of newly trained seminary students, able to pull excerpts about certain topics nearly instantaneously.

But the ethical questions around using generative A.I. for religious tasks have become more complicated as the technology has improved, religious leaders say. While most agree that using A.I. for tasks like research or marketing is acceptable, other uses for the technology, like sermon writing, are seen by some as a step too far."

We're using AI for stupid and unnecessary reasons. What if we just stopped? | Opinion; Detroit Free Press, January 6, 2025

 Nancy Kaffer, Detroit Free Press; We're using AI for stupid and unnecessary reasons. What if we just stopped? | Opinion

"We're jumping feet first into unreliable, unproven tech with devastating environmental costs and a dense thicket of ethical problems.

It's a bad idea. And — because I enjoy shouting into the void — we really ought to stop."

Tuesday, December 31, 2024

On roads teeming with robotaxis, crossing the street can be harrowing; The Washington Post, December 30, 2024

The Washington Post; On roads teeming with robotaxis, crossing the street can be harrowing

"When I try to cross my street at a marked crosswalk, the Waymo robotaxis often wouldn’t yield to me. I would step out into the white-striped pavement, look at the Waymo, wait to see whether it’s going to stop — and the car would zip right past.

It cut me off again and again on the path I use to get to work and take my kids to the park. It happened even when I was stuck in a small median halfway across the road. So I began using my phone to film myself crossing. I documented more than a dozen Waymo cars failing to yield in the span of a week. (You can watch some of my recordings below.)

It is a cautionary tale about how AI, intended to make us more safe, also needs to learn how to coexist with us. The experience has taught my family that the safest place around an autonomous vehicle is inside it, not walking around it...

What’s more, how does an AI designed to follow the law learn how to break it?...

I showed my videos to outside experts, too. Phil Koopman, a Carnegie Mellon University professor who conducts research on autonomous-vehicle safety, said Waymo had no excuse not to stop."

Saturday, December 28, 2024

Overcoming AI’s Nagging Trust And Ethics Issues; Forbes, December 28, 2024

Joe McKendrick, Forbes; Overcoming AI’s Nagging Trust And Ethics Issues

"Trust and ethics in AI is what is making business leaders nervous. For example, at least 72% of executives responding to a recent survey from the IBM Institute for Business Value say they “are willing to forgo generative AI benefits due to ethical concerns.” In addition, more than half (56%) indicate they are delaying major investments in generative AI until there is clarity on AI standards and regulations...

"Today, guardrails are a growing area of practice for the AI community given the stochastic nature of these models,” said Ross. “Guardrails can be employed for virtually any area of decisioning, from examining bias to preventing the leakage of sensitive data."...

The situation is not likely to change soon, Jeremy Rambarran, professor at Touro University Graduate School, pointed out. “Although the output that's being generated may be unique, depending on how the output is being presented, there's always a chance that part of the results may not be entirely accurate. This will eventually change down the road as algorithms are enhanced and could eventually be updated in an automated manner.”...

How can AI be best directed to be ethical and trustworthy? Compliance requirements, of course, will be a major driver of AI trust in the future, said Rambarran. “We need to ensure that AI-driven processes comply with ethical guidelines, legal regulations, and industry standards. Humans should be aware of the ethical implications of AI decisions and be ready to intervene when ethical concerns arise.”"

Friday, December 27, 2024

While the Court Fights Over AI and Copyright Continue, Congress and States Focus On Digital Replicas: 2024 in Review; Electronic Frontier Foundation (EFF), December 27, 2024

Corynne McSherry, Electronic Frontier Foundation (EFF); While the Court Fights Over AI and Copyright Continue, Congress and States Focus On Digital Replicas: 2024 in Review

"These state laws are a done deal, so we’ll just have to see how they play out. At the federal level, however, we still have a chance to steer policymakers in the right direction.  

We get it–everyone should be able to prevent unfair and deceptive commercial exploitation of their personas. But expanded property rights are not the way to do it. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voice, it should take a precise, careful and practical approach that avoids potential collateral damage to free expression, competition, and innovation."

‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years; The Guardian, December 27, 2024

The Guardian; ‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years

"The British-Canadian computer scientist often touted as a “godfather” of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is “much faster” than expected.

Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a “10% to 20%” chance that AI would lead to human extinction within the next three decades...

Hinton is one of the three “godfathers of AI” who have won the ACM AM Turing award – the computer science equivalent of the Nobel prize – for their work. However, one of the trio, Yann LeCun, the chief AI scientist at Mark Zuckerberg’s Meta, has played down the existential threat and has said AI “could actually save humanity from extinction”."