
Sunday, August 4, 2024

Music labels' AI lawsuits create copyright puzzle for courts; Reuters, August 4, 2024

Reuters; Music labels' AI lawsuits create copyright puzzle for courts

"Suno and Udio pointed to past public statements defending their technology when asked for comment for this story. They filed their initial responses in court on Thursday, denying any copyright violations and arguing that the lawsuits were attempts to stifle smaller competitors. They compared the labels' protests to past industry concerns about synthesizers, drum machines and other innovations replacing human musicians...

The labels' claims echo allegations by novelists, news outlets, music publishers and others in high-profile copyright lawsuits over chatbots like OpenAI's ChatGPT and Anthropic's Claude that use generative AI to create text. Those lawsuits are still pending and in their early stages.

Both sets of cases pose novel questions for the courts, including whether the law should make exceptions for AI's use of copyrighted material to create something new...

"Music copyright has always been a messy universe," said Julie Albert, an intellectual property partner at law firm Baker Botts in New York who is tracking the new cases. And even without that complication, Albert said fast-evolving AI technology is creating new uncertainty at every level of copyright law.

WHOSE FAIR USE?

The intricacies of music may matter less in the end if, as many expect, the AI cases boil down to a "fair use" defense against infringement claims – another area of U.S. copyright law filled with open questions."

OpenAI’s Sam Altman is becoming one of the most powerful people on Earth. We should be very afraid; The Observer via The Guardian, August 3, 2024

Gary Marcus, The Observer via The Guardian; OpenAI’s Sam Altman is becoming one of the most powerful people on Earth. We should be very afraid

"Unfortunately, many other AI companies seem to be on the path of hype and corner-cutting that Altman charted. Anthropic – formed from a set of OpenAI refugees who were worried that AI safety wasn’t taken seriously enough – seems increasingly to be competing directly with the mothership, with all that entails. The billion-dollar startup Perplexity seems to be another object lesson in greed, training on data it isn’t supposed to be using. Microsoft, meanwhile, went from advocating “responsible AI” to rushing out products with serious problems, pressuring Google to do the same. Money and power are corrupting AI, much as they corrupted social media.


We simply can’t trust giant, privately held AI startups to govern themselves in ethical and transparent ways. And if we can’t trust them to govern themselves, we certainly shouldn’t let them govern the world.

 

I honestly don’t think we will get to an AI that we can trust if we stay on the current path. Aside from the corrupting influence of power and money, there is a deep technical issue, too: large language models (the core technique of generative AI), invented by Google and made famous by Altman’s company, are unlikely ever to be safe. They are recalcitrant and opaque by nature – so-called “black boxes” that we can never fully rein in. The statistical techniques that drive them can do some amazing things, like speed up computer programming and create plausible-sounding interactive characters in the style of deceased loved ones or historical figures. But such black boxes have never been reliable, and as such they are a poor basis for AI that we could trust with our lives and our infrastructure.

 

That said, I don’t think we should abandon AI. Making better AI – for medicine, and material science, and climate science, and so on – really could transform the world. Generative AI is unlikely to do the trick, but some future, yet-to-be-developed form of AI might.

 

The irony is that the biggest threat to AI today may be the AI companies themselves; their bad behaviour and hyped promises are turning a lot of people off. Many are ready for government to take a stronger hand. According to a June poll by the Artificial Intelligence Policy Institute, 80% of American voters prefer “regulation of AI that mandates safety measures and government oversight of AI labs instead of allowing AI companies to self-regulate"."

Tuesday, July 30, 2024

An academic publisher has struck an AI data deal with Microsoft – without their authors’ knowledge; The Conversation, July 23, 2024

Lecturer in Law, University of New England, The Conversation; An academic publisher has struck an AI data deal with Microsoft – without their authors’ knowledge


"In May, a multibillion-dollar UK-based multinational called Informa announced in a trading update that it had signed a deal with Microsoft involving “access to advanced learning content and data, and a partnership to explore AI expert applications”. Informa is the parent company of Taylor & Francis, which publishes a wide range of academic and technical books and journals, so the data in question may include the content of these books and journals.

According to reports published last week, the authors of the content do not appear to have been asked or even informed about the deal. What’s more, they say they had no opportunity to opt out of the deal, and will not see any money from it...

The types of agreements being reached between academic publishers and AI companies have sparked bigger-picture concerns for many academics. Do we want scholarly research to be reduced to content for AI knowledge mining? There are no clear answers about the ethics and morals of such practices."

Friday, July 19, 2024

The Media Industry’s Race To License Content For AI; Forbes, July 18, 2024

  Bill Rosenblatt, Forbes; The Media Industry’s Race To License Content For AI

"AI content licensing initiatives abound. More and more media companies have reached license agreements with AI companies individually. Several startups have formed to aggregate content into large collections for AI platforms to license in one-stop shopping arrangements known in the jargon as blanket licenses. There are now so many such startups that last month they formed a trade association—the Dataset Providers Alliance—to organize them for advocacy.

Ironically, the growing volume of all this activity could jeopardize its value for copyright owners and AI platforms alike.

It will take years before the panoply of lawsuits yields any degree of clarity in the legal rules for copyright in the AI age; we’re in the second year of what is typically a decade-long process for copyright laws to adapt to disruptive technologies. One reason for copyright owners to organize now to provide licenses for AI is that—as we’ve learned from analogous situations in the past—both courts and Congress will consider how easy it is for the AI companies to license content properly in determining whether licensing is required."

Friday, June 28, 2024

Original sins and dirty secrets: GenAI has an ethics problem. These are the three things it most urgently needs to fix; Fortune, June 27, 2024

Fortune; Original sins and dirty secrets: GenAI has an ethics problem. These are the three things it most urgently needs to fix

"The ethics of generative AI has been in the news this week. AI companies have been accused of taking copyrighted creative works without permission to train their models, and there’s been documentation of those models producing outputs that plagiarize from that training data. Today, I’m going to make the case that generative AI can never be ethical as long as three issues that are currently inherent to the technology remain. First, there’s the fact that generative AI was created using stolen data. Second, it’s built on exploitative labor. And third, it’s exponentially worsening the energy crisis at a pivotal time when we need to be scaling back, not accelerating, our energy demands and environmental impact."

Wednesday, June 5, 2024

OpenAI and Google DeepMind workers warn of AI industry risks in open letter; The Guardian, June 4, 2024

The Guardian; OpenAI and Google DeepMind workers warn of AI industry risks in open letter

"A group of current and former employees at prominent artificial intelligence companies issued an open letter on Tuesday that warned of a lack of safety oversight within the industry and called for increased protections for whistleblowers.

The letter, which calls for a “right to warn about artificial intelligence”, is one of the most public statements about the dangers of AI from employees within what is generally a secretive industry. Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees – one of whom previously worked at Anthropic."

Monday, July 17, 2023

AI learned from their work. Now they want compensation.; The Washington Post, July 16, 2023

The Washington Post; AI learned from their work. Now they want compensation.

"Artists say the livelihoods of millions of creative workers are at stake, especially because AI tools are already being used to replace some human-made work. Mass scraping of art, writing and movies from the web for AI training is a practice creators say they never considered or consented to.

But in public appearances and in responses to lawsuits, the AI companies have argued that the use of copyrighted works to train AI falls under fair use — a concept in copyright law that creates an exception if the material is changed in a “transformative” way."

Thousands of authors urge AI companies to stop using work without permission; Morning Edition, NPR, July 17, 2023

Morning Edition, NPR; Thousands of authors urge AI companies to stop using work without permission

"Thousands of writers including Nora Roberts, Viet Thanh Nguyen, Michael Chabon and Margaret Atwood have signed a letter asking artificial intelligence companies like OpenAI and Meta to stop using their work without permission or compensation."

Wednesday, June 21, 2023

Ethics Teams in Tech Are Stymied by Lack of Support; Stanford University Human-Centered Artificial Intelligence (HAI), June 21, 2023

Stanford University Human-Centered Artificial Intelligence (HAI); Ethics Teams in Tech Are Stymied by Lack of Support

"In recent years, AI companies have been publicly chided for generating machine learning algorithms that discriminate against historically marginalized groups. To quell that criticism, many companies pledged to ensure their products are fair, transparent, and accountable, but these promises are frequently criticized as being mere “ethics washing,” says Sanna Ali, who recently received her PhD from the Stanford University Department of Communication in the School of Humanities and Sciences. “There’s a concern that these companies talk the talk but don’t walk the walk.”

To explore whether that’s the case, Ali interviewed AI ethics workers from some of the largest companies in the field. The research project, co-authored with Stanford Assistant Professor of Communication Angèle Christin, Google researcher Andrew Smart, and Stanford W.M. Keck Professor and Professor of Management Science and Engineering Riitta Katila, was partially funded by a seed grant from Stanford HAI and published in the Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT ’23). The study found that ethics initiatives and interventions were difficult to implement in the tech industry’s institutional environment. Specifically, Ali found, teams were largely under-resourced and under-supported by leadership, and they lacked authority to act on problems they identified."