Showing posts with label society. Show all posts

Tuesday, March 14, 2023

Microsoft lays off team that taught employees how to make AI tools responsibly; The Verge, March 13, 2023

Zoe Schiffer and Casey Newton, The Verge; Microsoft lays off team that taught employees how to make AI tools responsibly

"Microsoft laid off its entire ethics and society team within the artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company, Platformer has learned. 

The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools available to the mainstream, current and former employees said.

Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company’s AI initiatives. The company says its overall investment in responsibility work is increasing despite the recent layoffs."

Friday, October 4, 2019

Gatekeeping Is Not The Same As Censorship; Forbes, August 22, 2019

Kalev Leetaru, Forbes; Gatekeeping Is Not The Same As Censorship

"With each new effort by social media companies to rein in the deluge of digital falsehoods, accusations pour forth that such efforts represent censorship. In reality, the two represent very different concepts, with censorship referring to the repression of ideas in alignment with political, social or moral views, while gatekeeping in its broadest sense refers to efforts to maintain the quality of information published in a given venue. A censor prohibits discussion of topics with which they disagree. A gatekeeper is viewpoint-neutral, ensuring only that the information has been thoroughly vetted and verified...

In the end, both social platforms and society at large must recognize the clear distinction between the dangers of censorship and the benefits of gatekeeping."

Tuesday, February 26, 2019

When Is Technology Too Dangerous to Release to the Public?; Slate, February 22, 2019

Aaron Mak, Slate; When Is Technology Too Dangerous to Release to the Public?

"The announcement has also sparked a debate about how to handle the proliferation of potentially dangerous A.I. algorithms...

It’s worth considering, as OpenAI seems to be encouraging us to do, how researchers and society in general should approach powerful A.I. models...

Nevertheless, OpenAI said that it would only be publishing a “much smaller version” of the model due to concerns that it could be abused. The blog post fretted that it could be used to generate false news articles, impersonate people online, and generally flood the internet with spam and vitriol... 

“There’s a general philosophy that when the time has come for some scientific progress to happen, you really can’t stop it,” says [Robert] Frederking [the principal systems scientist at Carnegie Mellon’s Language Technologies Institute]. “You just need to figure out how you’re going to deal with it.”"

Tuesday, January 29, 2019

4 Ways AI Education and Ethics Will Disrupt Society in 2019; EdSurge, January 28, 2019

Tara Chklovski, EdSurge; 4 Ways AI Education and Ethics Will Disrupt Society in 2019

"I see four AI use and ethics trends set to disrupt classrooms and conference rooms. Education focused on deeper learning and understanding of this transformative technology will be critical to furthering the debate and ensuring positive progress that protects social good."

Sunday, December 30, 2018

Colleges Grapple With Teaching the Technology and Ethics of A.I.; The New York Times, November 2, 2018

Alina Tugend, The New York Times; Colleges Grapple With Teaching the Technology and Ethics of A.I.

"At the University of Washington, a new class called “Intelligent Machinery, Identity and Ethics,” is being taught this fall by a team leader at Google and the co-director of the university’s Computational Neuroscience program.

Daniel Grossman, a professor and deputy director of undergraduate studies at the university’s Paul G. Allen School of Computer Science and Engineering, explained the purpose this way:

The course “aims to get at the big ethical questions we’ll be facing, not just in the next year or two but in the next decade or two.”

David Danks, a professor of philosophy and psychology at Carnegie Mellon, just started teaching a class, “A.I., Society and Humanity.” The class is an outgrowth of faculty coming together over the past three years to create shared research projects, he said, because students need to learn from both those who are trained in the technology and those who are trained in asking ethical questions.

“The key is to make sure they have the opportunities to really explore the ways technology can have an impact — to think how this will affect people in poorer communities or how it can be abused,” he said."

Monday, June 4, 2018

Stanford to step up teaching of ethics in technology; Financial Times, June 3, 2018

Financial Times; Stanford to step up teaching of ethics in technology

"The university at the heart of Silicon Valley is to inject ethics into its technology teaching and research amid growing criticism of the excesses of the industry it helped spawn.

The board of Stanford University, one of the world’s richest higher education institutions with an endowment of $27bn, will meet this month to agree funding and a plan to implement the findings of an internal review that recommends a new initiative focused on “ethics, society and technology” and improved access to those on lower incomes."