Showing posts with label Microsoft.

Thursday, February 29, 2024

The Intercept, Raw Story and AlterNet sue OpenAI for copyright infringement; The Guardian, February 28, 2024

The Guardian; The Intercept, Raw Story and AlterNet sue OpenAI for copyright infringement

"OpenAI and Microsoft are facing a fresh round of lawsuits from news publishers over allegations that their generative artificial intelligence products violated copyright laws and were trained illegally using journalists’ work. Three progressive US outlets – the Intercept, Raw Story and AlterNet – filed suits in Manhattan federal court on Wednesday, demanding compensation from the tech companies.

The news outlets claim that the companies in effect plagiarized copyright-protected articles to develop and operate ChatGPT, which has become OpenAI’s most prominent generative AI tool. They allege that ChatGPT was trained not to respect copyright, ignores proper attribution and fails to notify users when the service’s answers are generated using journalists’ protected work."

Thursday, December 28, 2023

Complaint: New York Times v. Microsoft & OpenAI, December 2023

Complaint

THE NEW YORK TIMES COMPANY Plaintiff,

v.

MICROSOFT CORPORATION, OPENAI, INC., OPENAI LP, OPENAI GP, LLC, OPENAI, LLC, OPENAI OPCO LLC, OPENAI GLOBAL LLC, OAI CORPORATION, LLC, and OPENAI HOLDINGS, LLC,

Defendants

Wednesday, December 27, 2023

The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work; The New York Times, December 27, 2023

Michael M. Grynbaum, The New York Times; The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work

"The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies.

The Times is the first major American media organization to sue the companies, the creators of ChatGPT and other popular A.I. platforms, over copyright issues associated with its written works. The lawsuit, filed in Federal District Court in Manhattan, contends that millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times."

Thursday, October 19, 2023

AI is learning from stolen intellectual property. It needs to stop.; The Washington Post, October 19, 2023

William D. Cohan, The Washington Post; AI is learning from stolen intellectual property. It needs to stop.

"The other day someone sent me the searchable database published by Atlantic magazine of more than 191,000 e-books that have been used to train the generative AI systems being developed by Meta, Bloomberg and others. It turns out that four of my seven books are in the data set, called Books3. Whoa.

Not only did I not give permission for my books to be used to generate AI products, but I also wasn’t even consulted about it. I had no idea this was happening. Neither did my publishers, Penguin Random House (for three of the books) and Macmillan (for the other one). Neither my publishers nor I were compensated for use of my intellectual property. Books3 just scraped the content away for free, with Meta et al. profiting merrily along the way. And Books3 is just one of many pirated collections being used for this purpose...

This is wholly unacceptable behavior. Our books are copyrighted material, not free fodder for wealthy companies to use as they see fit, without permission or compensation. Many, many hours of serious research, creative angst and plain old hard work go into writing and publishing a book, and few writers are compensated like professional athletes, Hollywood actors or Wall Street investment bankers. Stealing our intellectual property hurts." 

Tuesday, March 14, 2023

Microsoft lays off team that taught employees how to make AI tools responsibly; The Verge, March 13, 2023

Zoe Schiffer and Casey Newton, The Verge; Microsoft lays off team that taught employees how to make AI tools responsibly

"Microsoft laid off its entire ethics and society team within the artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company, Platformer has learned. 

The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools available to the mainstream, current and former employees said.

Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company’s AI initiatives. The company says its overall investment in responsibility work is increasing despite the recent layoffs."

Saturday, February 25, 2023

History May Wonder Why Microsoft Let Its Principles Go for a Creepy, Clingy Bot; The New York Times, February 23, 2023

The New York Times; History May Wonder Why Microsoft Let Its Principles Go for a Creepy, Clingy Bot

"Microsoft’s “responsible A.I.” program started in 2017 with six principles by which it pledged to conduct business. Suddenly, it is on the precipice of violating all but one of those principles. (Though the company says it is still adhering to all six of them.)"

Thursday, July 30, 2020

Congress forced Silicon Valley to answer for its misdeeds. It was a glorious sight; The Guardian, July 30, 2020

The Guardian; Congress forced Silicon Valley to answer for its misdeeds. It was a glorious sight

"As David Cicilline put it: “These companies as they exist today have monopoly power. Some need to be broken up, all need to be properly regulated and held accountable.” And then he quoted Louis Brandeis, who said, “We can have democracy in this country, or we can have great wealth concentrated in the hands of a few, but we can’t have both.”"

Monday, March 4, 2019

Is Ethical A.I. Even Possible?; The New York Times, March 1, 2019

Cade Metz, The New York Times; Is Ethical A.I. Even Possible?

"As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation."

Saturday, February 16, 2019

Vatican, Microsoft team up on artificial intelligence ethics; The Washington Post, February 13, 2019

Associated Press via The Washington Post; Vatican, Microsoft team up on artificial intelligence ethics

"The Vatican says it is teaming up with Microsoft on an academic prize to promote ethics in artificial intelligence.

Pope Francis met privately on Wednesday with Microsoft President Brad Smith and the head of a Vatican scientific office that promotes Catholic Church positions on human life.

The Vatican said Smith and Archbishop Vincenzo Paglia of the Pontifical Academy for Life told Francis about the international prize for an individual who has successfully defended a dissertation on ethical issues involving artificial intelligence."

Wednesday, March 28, 2018

A Needle In A Legal Haystack Could Sink A Major Supreme Court Privacy Case; NPR, March 28, 2018

Nina Totenberg, NPR; A Needle In A Legal Haystack Could Sink A Major Supreme Court Privacy Case

"The question in the case is whether a U.S. technology company can refuse to honor a court-ordered U.S. search warrant seeking information that is stored at a facility outside the United States...

...[A]s the case came to the justices, they were going to have to apply current advanced technology to the Stored Communications Act, a law enacted in 1986, several years before email even became available for wide public use.

Amazingly, just three weeks after the Supreme Court argument, lo and behold, a Congress famous for gridlock passed legislation to modernize the law...

Titled the Cloud Act (Clarifying Lawful Overseas Use of Data), the statute was attached to the 2,232 page, $1.3 trillion omnibus spending bill."

Sunday, January 21, 2018

It's time to eliminate Oregon's digital divide: Guest opinion; Oregon Live, January 19, 2018

Brant Wolf, Oregon Live; It's time to eliminate Oregon's digital divide: Guest opinion

"Right now, Oregon's urban areas along the I-5 corridor have better and more reliable access to high-speed broadband internet than our fellow Oregonians in more rural areas. That inequity in access causes a "digital divide." People who happen to live in big cities have multiple options for high-speed internet, while those in certain rural and remote areas have limited options.

The everyday impacts of that divide are obvious. It's more difficult to start and maintain a new business without access to internet. It's more difficult for the unemployed to find jobs. Young students who don't have access to high-speed internet fall behind their classmates who do. It can also be more difficult to gain access to quality health care through telemedicine options.

But it doesn't have to be this way."

Thursday, January 18, 2018

In new book, Microsoft cautions humanity to develop AI ethics guidelines now; GeekWire, January 17, 2018

Monica Nickelsburg, GeekWire; In new book, Microsoft cautions humanity to develop AI ethics guidelines now

"This dangerous scenario is one of many posited in “The Future Computed,” a new book published by Microsoft, with a foreword by Brad Smith, Microsoft president and chief legal officer, and Harry Shum, executive vice president of Microsoft’s Artificial Intelligence and Research group.

The book examines the use cases and potential dangers of AI technology, which will soon be integrated into many of the systems people use everyday. Microsoft believes AI should be developed with six core principles: “fair, reliable and safe, private and secure, inclusive, transparent, and accountable.”

Nimble policymaking and strong ethical guidelines are essential to ensuring AI doesn’t threaten equity or security, Microsoft says. In other words, we need to start planning now to avoid a scenario like the one facing the imaginary tech company looking for software engineers."

Thursday, August 3, 2017

To Protect Voting, Use Open-Source Software; New York Times, August 3, 2017

R. James Woolsey and Brian J. Fox, New York Times; To Protect Voting, Use Open-Source Software

"If the community of proprietary vendors, including Microsoft, would support the use of an open-source model for elections, we could expedite progress toward secure voting systems.

With an election on the horizon, it’s urgent that we ensure that those who seek to make our voting systems more secure have easy access to them, and that Mr. Putin does not."

Tuesday, July 11, 2017

Microsoft Courts Rural America, And Politicians, With High-Speed Internet; NPR, All Tech Considered, July 11, 2017

Aarti Shahani, NPR, All Tech Considered; Microsoft Courts Rural America, And Politicians, With High-Speed Internet

"Millions of people in rural America don't have the Internet connectivity that those in cities take for granted. Microsoft is pledging to get 2 million rural Americans online, in a five-year plan; and the company is going to push phone companies and regulators to help get the whole 23.4 million connected."

Tuesday, January 3, 2017

The Great A.I. Awakening; New York Times, 12/14/16

Gideon Lewis-Kraus, New York Times; The Great A.I. Awakening:

"Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industrywide machine-learning delirium. Over the past four years, six companies in particular — Google, Facebook, Apple, Amazon, Microsoft and the Chinese firm Baidu — have touched off an arms race for A.I. talent, particularly within universities. Corporate promises of resources and freedom have thinned out top academic departments. It has become widely known in Silicon Valley that Mark Zuckerberg, chief executive of Facebook, personally oversees, with phone calls and video-chat blandishments, his company’s overtures to the most desirable graduate students. Starting salaries of seven figures are not unheard-of. Attendance at the field’s most important academic conference has nearly quadrupled. What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence."

Tuesday, December 6, 2016

Facebook, Twitter, Google and Microsoft team up to tackle extremist content; Guardian, 12/5/16

Olivia Solon, Guardian; Facebook, Twitter, Google and Microsoft team up to tackle extremist content:

"Google, Facebook, Twitter and Microsoft have pledged to work together to identify and remove extremist content on their platforms through an information-sharing initiative.

The companies are to create a shared database of unique digital fingerprints – known as “hashes” – for images and videos that promote terrorism. This could include recruitment videos or violent terrorist imagery or memes. When one company identifies and removes such a piece of content, the others will be able to use the hash to identify and remove the same piece of content from their own network...

Because the companies have different policies on what constitutes terrorist content, they will start by sharing hashes of “the most extreme and egregious terrorist images and videos” as they are most likely to violate “all of our respective companies” content policies, they said."
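The shared hash database described in the excerpt can be sketched in a few lines. This is an illustrative toy, not any company's actual system: the class name `SharedHashDatabase` and the use of SHA-256 exact-match fingerprints are assumptions for demonstration (production systems such as PhotoDNA use perceptual hashes that survive re-encoding and cropping), and the byte strings are placeholders.

```python
# Toy sketch of a cross-platform shared hash database (illustrative only).
import hashlib


class SharedHashDatabase:
    """Stores content fingerprints contributed by participating companies."""

    def __init__(self):
        # fingerprint -> name of the company that flagged the content
        self.hashes = {}

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # Exact-match SHA-256 for simplicity; real systems use perceptual hashes.
        return hashlib.sha256(content).hexdigest()

    def flag(self, content: bytes, company: str) -> None:
        # A company identifies and removes a piece of content, then shares its hash.
        self.hashes[self.fingerprint(content)] = company

    def is_flagged(self, content: bytes) -> bool:
        # Any other participant can check new uploads against the shared hashes.
        return self.fingerprint(content) in self.hashes


db = SharedHashDatabase()
db.flag(b"<placeholder: extremist video bytes>", "Facebook")
db.is_flagged(b"<placeholder: extremist video bytes>")  # → True
db.is_flagged(b"<placeholder: unrelated upload>")       # → False
```

Note how the scheme shares only fingerprints, never the content itself, which is why the companies can cooperate while keeping separate content policies: each platform decides for itself what to do with a match.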

Wednesday, June 1, 2016

Why the World Is Drawing Battle Lines Against American Tech Giants; New York Times, 6/1/16

Farhad Manjoo, New York Times; Why the World Is Drawing Battle Lines Against American Tech Giants:

"Over the last decade, we have witnessed the rise of what I like to call the Frightful Five. These companies — Apple, Amazon, Facebook, Microsoft and Alphabet, Google’s parent — have created a set of inescapable tech platforms that govern much of the business world. The five have grown expansive in their business aims and invincible to just about any competition. Their collective powers are a source of pride and fear for Americans. These companies thoroughly dominate the news and entertainment industries, they rule advertising and retail sales, and they’re pushing into health care, energy and automobiles.

For all the disruptions, good and bad, Americans may experience as a result of the rise of the Frightful Five, there is one saving grace: The companies are American. Not only were they founded by Americans and have their headquarters here (complicated global tax structures notwithstanding), but they all tend to espouse American values like free trade, free expression and a skepticism of regulation. Until the surveillance revealed by the National Security Agency contractor Edward J. Snowden, many American tech companies were also more deferential to the American government, especially its requests for law enforcement help.

In the rest of the world, the Americanness of the Frightful Five is often seen as a reason for fear, not comfort. In part that’s because of a worry about American hegemony: The bigger these companies get, the less room they leave for local competition — and the more room for possible spying by the United States government."

Friday, March 25, 2016

Microsoft scrambles to limit PR damage over abusive AI bot Tay; Guardian, 3/24/16

Alex Hern, Guardian; Microsoft scrambles to limit PR damage over abusive AI bot Tay:

"Microsoft is battling to control the public relations damage done by its “millennial” chatbot, which turned into a genocide-supporting Nazi less than 24 hours after it was let loose on the internet.

The chatbot, named “Tay” (and, as is often the case, gendered female), was designed to have conversations with Twitter users, and learn how to mimic a human by copying their speech patterns. It was supposed to mimic people aged 18–24 but a brush with the dark side of the net, led by emigrants from the notorious 4chan forum, instead taught her to tweet phrases such as “I fucking hate feminists and they should all die and burn in hell” and “HITLER DID NOTHING WRONG”."