Showing posts with label Microsoft.

Monday, October 21, 2024

Microsoft boss urges rethink of copyright laws for AI; The Times, October 21, 2024

 Katie Prescott, The Times; Microsoft boss urges rethink of copyright laws for AI

"The boss of Microsoft has called for a rethink of copyright laws so that tech giants are able to train artificial intelligence models without risk of infringing intellectual property rights.

Satya Nadella, chief executive of the technology multinational, praised Japan’s more flexible copyright laws and said that governments need to develop a new legal framework to define “fair use” of material, which allows people in certain situations to use intellectual property without permission.

Nadella, 57, said governments needed to iron out the rules. “What are the bounds for copyright, which obviously have to be protected? What’s fair use?” he said. “For any society to move forward, you need to know what is fair use.”"

Monday, September 30, 2024

OpenAI Faces Early Appeal in First AI Copyright Suit From Coders; Bloomberg Law, September 30, 2024

Isaiah Poritz, Bloomberg Law; OpenAI Faces Early Appeal in First AI Copyright Suit From Coders

"OpenAI Inc. and Microsoft Corp.‘s GitHub will head to the country’s largest federal appeals court to resolve their first copyright lawsuit from open-source programmers who claim the companies’ AI coding tool Copilot violates a decades-old digital copyright law.

Judge Jon S. Tigar granted the programmers’ request for a mid-case turn to the US Court of Appeals for the Ninth Circuit, which must determine whether OpenAI’s copying of open-source code to train its AI model without proper attribution to the programmers could be a violation of the Digital Millennium Copyright Act...

The programmers argued that Copilot fails to include authorship and licensing terms when it outputs code. Unlike other lawsuits against AI companies, the programmers didn’t allege that OpenAI and GitHub engaged in copyright infringement, which is different from a DMCA violation."

Wednesday, September 25, 2024

OpenAI, Microsoft, Amazon among first AI Pact signatories; Euronews, September 25, 2024

Cynthia Kroet, Euronews; OpenAI, Microsoft, Amazon among first AI Pact signatories

"OpenAI, Microsoft and Amazon are among 100 companies who are the first to sign up to a voluntary alliance aiming to help usher in new AI legislation, the European Commission said today (25 September)...

The Commission previously said that some 700 companies have shown interest in joining the Pact – which involves voluntary preparatory commitments to help businesses get ready for the incoming AI Act...

The Pact supports industry's voluntary commitments related to easing the uptake of AI in organisations, identifying AI systems likely to be categorised as high-risk under the rules and promoting AI literacy.

In addition to these core commitments, more than half of the signatories committed to additional pledges, including ensuring human oversight, mitigating risks, and transparently labelling certain types of AI-generated content, such as deepfakes, the Commission said...

The AI Act, the world’s first legal framework that regulates AI models according to the risk they pose, entered into force in August."

Thursday, August 29, 2024

OpenAI Pushes Prompt-Hacking Defense to Deflect Copyright Claims; Bloomberg Law, August 29, 2024

 Annelise Gilbert, Bloomberg Law; OpenAI Pushes Prompt-Hacking Defense to Deflect Copyright Claims

"Diverting attention to hacking claims or how many tries it took to obtain exemplary outputs, however, avoids addressing most publishers’ primary allegation: AI tools illegally trained on copyrighted works."

Tuesday, July 30, 2024

An academic publisher has struck an AI data deal with Microsoft – without their authors’ knowledge; The Conversation, July 23, 2024

Lecturer in Law, University of New England, The Conversation; An academic publisher has struck an AI data deal with Microsoft – without their authors’ knowledge

"In May, a multibillion-dollar UK-based multinational called Informa announced in a trading update that it had signed a deal with Microsoft involving “access to advanced learning content and data, and a partnership to explore AI expert applications”. Informa is the parent company of Taylor & Francis, which publishes a wide range of academic and technical books and journals, so the data in question may include the content of these books and journals.

According to reports published last week, the authors of the content do not appear to have been asked or even informed about the deal. What’s more, they say they had no opportunity to opt out of the deal, and will not see any money from it...

The types of agreements being reached between academic publishers and AI companies have sparked bigger-picture concerns for many academics. Do we want scholarly research to be reduced to content for AI knowledge mining? There are no clear answers about the ethics and morals of such practices."

Saturday, June 29, 2024

Microsoft’s AI boss thinks it’s perfectly OK to steal content if it’s on the open web; The Verge, June 28, 2024

  Sean Hollister, The Verge; Microsoft’s AI boss thinks it’s perfectly OK to steal content if it’s on the open web

"Microsoft AI boss Mustafa Suleyman incorrectly believes that the moment you publish anything on the open web, it becomes “freeware” that anyone can freely copy and use. 

When CNBC’s Andrew Ross Sorkin asked him whether “AI companies have effectively stolen the world’s IP,” he said:

I think that with respect to content that’s already on the open web, the social contract of that content since the ‘90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been “freeware,” if you like, that’s been the understanding...

I am not a lawyer, but even I can tell you that the moment you create a work, it’s automatically protected by copyright in the US." 

Wednesday, May 1, 2024

Microsoft’s “responsible AI” chief worries about the open web; The Washington Post, May 1, 2024

The Washington Post; Microsoft’s “responsible AI” chief worries about the open web

"As tech giants move toward a world in which chatbots supplement, and perhaps supplant, search engines, the Microsoft executive assigned to make sure AI is used responsibly said the industry has to be careful not to break the business model of the wider web. Search engines citing and linking to the websites they draw from is “part of the core bargain of search,” Natasha Crampton said in an interview Monday.

Crampton, Microsoft’s chief Responsible AI officer, spoke with The Technology 202 ahead of Microsoft’s release today of its first “Responsible AI Transparency Report.” The 39-page report, which the company is billing as the first of its kind from a major tech firm, details how Microsoft plans to keep its rapidly expanding stable of AI tools from wreaking havoc. 

It makes the case that the company has closely integrated Crampton’s Responsible AI team into its development of new AI products. It also details the progress the company has made toward meeting some of the Voluntary AI Commitments that Microsoft and other tech giants signed on to in September as part of the Biden administration’s push to regulate artificial intelligence. Those include developing safety evaluation systems for its AI cloud tools, expanding its internal AI “red teams,” and allowing users to mark images as AI-generated."

Responsible AI Transparency Report; Microsoft, May 2024

 Microsoft; Responsible AI Transparency Report

Thursday, February 29, 2024

The Intercept, Raw Story and AlterNet sue OpenAI for copyright infringement; The Guardian, February 28, 2024

The Guardian; The Intercept, Raw Story and AlterNet sue OpenAI for copyright infringement

"OpenAI and Microsoft are facing a fresh round of lawsuits from news publishers over allegations that their generative artificial intelligence products violated copyright laws and illegally trained by using journalists’ work. Three progressive US outlets – the Intercept, Raw Story and AlterNet – filed suits in Manhattan federal court on Wednesday, demanding compensation from the tech companies.

The news outlets claim that the companies in effect plagiarized copyright-protected articles to develop and operate ChatGPT, which has become OpenAI’s most prominent generative AI tool. They allege that ChatGPT was trained not to respect copyright, ignores proper attribution and fails to notify users when the service’s answers are generated using journalists’ protected work."

Thursday, December 28, 2023

Complaint: New York Times v. Microsoft & OpenAI, December 2023

Complaint

THE NEW YORK TIMES COMPANY Plaintiff,

v.

MICROSOFT CORPORATION, OPENAI, INC., OPENAI LP, OPENAI GP, LLC, OPENAI, LLC, OPENAI OPCO LLC, OPENAI GLOBAL LLC, OAI CORPORATION, LLC, and OPENAI HOLDINGS, LLC,

Defendants

Wednesday, December 27, 2023

The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work; The New York Times, December 27, 2023

Michael M. Grynbaum, The New York Times; The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work

"The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies.

The Times is the first major American media organization to sue the companies, the creators of ChatGPT and other popular A.I. platforms, over copyright issues associated with its written works. The lawsuit, filed in Federal District Court in Manhattan, contends that millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times."

Thursday, October 19, 2023

AI is learning from stolen intellectual property. It needs to stop.; The Washington Post, October 19, 2023

William D. Cohan , The Washington Post; AI is learning from stolen intellectual property. It needs to stop.

"The other day someone sent me the searchable database published by Atlantic magazine of more than 191,000 e-books that have been used to train the generative AI systems being developed by Meta, Bloomberg and others. It turns out that four of my seven books are in the data set, called Books3. Whoa.

Not only did I not give permission for my books to be used to generate AI products, but I also wasn’t even consulted about it. I had no idea this was happening. Neither did my publishers, Penguin Random House (for three of the books) and Macmillan (for the other one). Neither my publishers nor I were compensated for use of my intellectual property. Books3 just scraped the content away for free, with Meta et al. profiting merrily along the way. And Books3 is just one of many pirated collections being used for this purpose...

This is wholly unacceptable behavior. Our books are copyrighted material, not free fodder for wealthy companies to use as they see fit, without permission or compensation. Many, many hours of serious research, creative angst and plain old hard work go into writing and publishing a book, and few writers are compensated like professional athletes, Hollywood actors or Wall Street investment bankers. Stealing our intellectual property hurts." 

Tuesday, March 14, 2023

Microsoft lays off team that taught employees how to make AI tools responsibly; The Verge, March 13, 2023

Zoe Schiffer and Casey Newton, The Verge; Microsoft lays off team that taught employees how to make AI tools responsibly

"Microsoft laid off its entire ethics and society team within the artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company, Platformer has learned. 

The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools available to the mainstream, current and former employees said.

Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company’s AI initiatives. The company says its overall investment in responsibility work is increasing despite the recent layoffs."

Saturday, February 25, 2023

History May Wonder Why Microsoft Let Its Principles Go for a Creepy, Clingy Bot; The New York Times, February 23, 2023

The New York Times; History May Wonder Why Microsoft Let Its Principles Go for a Creepy, Clingy Bot

"Microsoft’s “responsible A.I.” program started in 2017 with six principles by which it pledged to conduct business. Suddenly, it is on the precipice of violating all but one of those principles. (Though the company says it is still adhering to all six of them.)"

Thursday, July 30, 2020

Congress forced Silicon Valley to answer for its misdeeds. It was a glorious sight; The Guardian, July 30, 2020

The Guardian; Congress forced Silicon Valley to answer for its misdeeds. It was a glorious sight

"As David Cicilline put it: “These companies as they exist today have monopoly power. Some need to be broken up, all need to be properly regulated and held accountable.” And then he quoted Louis Brandeis, who said, “We can have democracy in this country, or we can have great wealth concentrated in the hands of a few, but we can’t have both.”"

Monday, March 4, 2019

Is Ethical A.I. Even Possible?; The New York Times, March 1, 2019

Cade Metz, The New York Times; Is Ethical A.I. Even Possible?

"As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation."

Saturday, February 16, 2019

Vatican, Microsoft team up on artificial intelligence ethics; The Washington Post, February 13, 2019

Associated Press via The Washington Post; Vatican, Microsoft team up on artificial intelligence ethics

"The Vatican says it is teaming up with Microsoft on an academic prize to promote ethics in artificial intelligence.

Pope Francis met privately on Wednesday with Microsoft President Brad Smith and the head of a Vatican scientific office that promotes Catholic Church positions on human life.

The Vatican said Smith and Archbishop Vincenzo Paglia of the Pontifical Academy for Life told Francis about the international prize for an individual who has successfully defended a dissertation on ethical issues involving artificial intelligence."

Wednesday, March 28, 2018

A Needle In A Legal Haystack Could Sink A Major Supreme Court Privacy Case; NPR, March 28, 2018

Nina Totenberg, NPR; A Needle In A Legal Haystack Could Sink A Major Supreme Court Privacy Case

"The question in the case is whether a U.S. technology company can refuse to honor a court-ordered U.S. search warrant seeking information that is stored at a facility outside the United States...

...[A]s the case came to the justices, they were going to have to apply current advanced technology to the Stored Communications Act, a law enacted in 1986, several years before email even became available for wide public use.

Amazingly, just three weeks after the Supreme Court argument, lo and behold, a Congress famous for gridlock passed legislation to modernize the law...

Titled the Cloud Act (Clarifying Lawful Overseas Use of Data), the statute was attached to the 2,232 page, $1.3 trillion omnibus spending bill."

Sunday, January 21, 2018

It's time to eliminate Oregon's digital divide: Guest opinion; Oregon Live, January 19, 2018

Brant Wolf, Oregon Live; It's time to eliminate Oregon's digital divide: Guest opinion

"Right now, Oregon's urban areas along the I-5 corridor have better and more reliable access to high-speed broadband internet than our fellow Oregonians in more rural areas. That inequity in access causes a "digital divide." People who happen to live in big cities have multiple options for high-speed internet, while those in certain rural and remote areas have limited options.

The everyday impacts of that divide are obvious. It's more difficult to start and maintain a new business without access to internet. It's more difficult for the unemployed to find jobs. Young students who don't have access to high-speed internet fall behind their classmates who do. It can also be more difficult to gain access to quality health care through telemedicine options.

But it doesn't have to be this way."

Thursday, January 18, 2018

In new book, Microsoft cautions humanity to develop AI ethics guidelines now; GeekWire, January 17, 2018

Monica Nickelsburg, GeekWire; In new book, Microsoft cautions humanity to develop AI ethics guidelines now

"This dangerous scenario is one of many posited in “The Future Computed,” a new book published by Microsoft, with a foreword by Brad Smith, Microsoft president and chief legal officer, and Harry Shum, executive vice president of Microsoft’s Artificial Intelligence and Research group.

The book examines the use cases and potential dangers of AI technology, which will soon be integrated into many of the systems people use everyday. Microsoft believes AI should be developed with six core principles: “fair, reliable and safe, private and secure, inclusive, transparent, and accountable.”

Nimble policymaking and strong ethical guidelines are essential to ensuring AI doesn’t threaten equity or security, Microsoft says. In other words, we need to start planning now to avoid a scenario like the one facing the imaginary tech company looking for software engineers."