
Thursday, November 21, 2024

AI task force proposes ‘artificial intelligence, ethics and society’ minor in BCLA; The Los Angeles Loyolan, November 18, 2024

Coleman Standifer, asst. managing editor, and Grace McNeill, asst. managing editor, The Los Angeles Loyolan; AI task force proposes ‘artificial intelligence, ethics and society’ minor in BCLA

"The Bellarmine College of Liberal Arts (BCLA) is taking steps to further educate students on artificial intelligence (AI) through the development of an “artificial intelligence, ethics and society” minor, spearheaded by an AI task force. This proposed addition comes two years after the widespread adoption of OpenAI's ChatGPT in classrooms.

Prior to stepping into his role as the new dean of BCLA, Richard Fox, Ph.D., surveyed BCLA’s 175 faculty about how the college could best support their teaching. Among the top three responses from faculty were concerns about navigating AI in the classroom, Fox told the Loyolan.

As of now, BCLA has no college-wide policy on AI usage and allows instructors to determine how AI is — or is not — utilized in the classroom.

“We usually don't dictate how people teach. That is the essence of academic freedom," said Fox. “What I want to make sure we're doing is we're preparing students to enter a world where they have these myriad different expectations on writing from their faculty members.”

Headed by Roberto Dell’Oro, Ph.D., professor of theological studies and director of the Bioethics Institute, the task force met over the summer and culminated in a proposal for a minor in BCLA. The proposal — which Dell'Oro sent to the Loyolan — was delivered to Fox in August and now awaits a formal proposal to be drawn up before approval, according to Dell’Oro.

The minor must then be approved by the Academic Planning and Review Committee (APRC), a committee tasked with advising Provost Thomas Poon, Ph.D., on evaluating proposals for new programs.

According to the proposal, the proposed minor aims “to raise awareness about the implications of AI technologies, emphasize the importance of ethical considerations in its development and promote interdisciplinary research at the intersection of AI, ethics, and society.”

The minor — if approved by the APRC — would have “four or five classes,” with the possibility of having an introductory course taught by faculty in the Seaver College of Science and Engineering, according to the proposal.

Most of the sample courses in the proposal include classes rooted in philosophy and ethics, such as, “AI, Robots, and the Philosophy of the Person,” “Could Robots Have Rights?” and “Introduction to Bioethics.” According to Dell’Oro, the hope is to have courses available for enrollment by Fall 2025."

Tuesday, March 14, 2023

Microsoft lays off team that taught employees how to make AI tools responsibly; The Verge, March 13, 2023

Zoe Schiffer and Casey Newton, The Verge; Microsoft lays off team that taught employees how to make AI tools responsibly

"Microsoft laid off its entire ethics and society team within the artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company, Platformer has learned. 

The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools available to the mainstream, current and former employees said.

Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company’s AI initiatives. The company says its overall investment in responsibility work is increasing despite the recent layoffs."

Friday, October 4, 2019

Gatekeeping Is Not The Same As Censorship; Forbes, August 22, 2019

Kalev Leetaru, Forbes; Gatekeeping Is Not The Same As Censorship

"With each new effort by social media companies to rein in the deluge of digital falsehoods, accusations pour forth that such efforts represent censorship. In reality, the two represent very different concepts, with censorship referring to the repression of ideas in alignment with political, social or moral views, while gatekeeping in its broadest sense refers to efforts to maintain the quality of information published in a given venue. A censor prohibits discussion of topics with which they disagree. A gatekeeper is viewpoint-neutral, ensuring only that the information has been thoroughly vetted and verified...

In the end, both social platforms and society at large must recognize the clear distinction between the dangers of censorship and the benefits of gatekeeping."

Tuesday, February 26, 2019

When Is Technology Too Dangerous to Release to the Public?; Slate, February 22, 2019

Aaron Mak, Slate; When Is Technology Too Dangerous to Release to the Public?

"The announcement has also sparked a debate about how to handle the proliferation of potentially dangerous A.I. algorithms...

It’s worth considering, as OpenAI seems to be encouraging us to do, how researchers and society in general should approach powerful A.I. models...

Nevertheless, OpenAI said that it would only be publishing a “much smaller version” of the model due to concerns that it could be abused. The blog post fretted that it could be used to generate false news articles, impersonate people online, and generally flood the internet with spam and vitriol... 

“There’s a general philosophy that when the time has come for some scientific progress to happen, you really can’t stop it,” says [Robert] Frederking [the principal systems scientist at Carnegie Mellon’s Language Technologies Institute]. “You just need to figure out how you’re going to deal with it.”"

Tuesday, January 29, 2019

4 Ways AI Education and Ethics Will Disrupt Society in 2019; EdSurge, January 28, 2019

Tara Chklovski, EdSurge; 4 Ways AI Education and Ethics Will Disrupt Society in 2019

"I see four AI use and ethics trends set to disrupt classrooms and conference rooms. Education focused on deeper learning and understanding of this transformative technology will be critical to furthering the debate and ensuring positive progress that protects social good."

Sunday, December 30, 2018

Colleges Grapple With Teaching the Technology and Ethics of A.I.; The New York Times, November 2, 2018

Alina Tugend, The New York Times; Colleges Grapple With Teaching the Technology and Ethics of A.I.
"At the University of Washington, a new class called “Intelligent Machinery, Identity and Ethics,” is being taught this fall by a team leader at Google and the co-director of the university’s Computational Neuroscience program.

Daniel Grossman, a professor and deputy director of undergraduate studies at the university’s Paul G. Allen School of Computer Science and Engineering, explained the purpose this way:

The course “aims to get at the big ethical questions we’ll be facing, not just in the next year or two but in the next decade or two.”

David Danks, a professor of philosophy and psychology at Carnegie Mellon, just started teaching a class, “A.I., Society and Humanity.” The class is an outgrowth of faculty coming together over the past three years to create shared research projects, he said, because students need to learn from both those who are trained in the technology and those who are trained in asking ethical questions.

“The key is to make sure they have the opportunities to really explore the ways technology can have an impact — to think how this will affect people in poorer communities or how it can be abused,” he said."

Monday, June 4, 2018

Stanford to step up teaching of ethics in technology; Financial Times, June 3, 2018

Financial Times; Stanford to step up teaching of ethics in technology

"The university at the heart of Silicon Valley is to inject ethics into its technology teaching and research amid growing criticism of the excesses of the industry it helped spawn.

The board of Stanford University, one of the world’s richest higher education institutions with an endowment of $27bn, will meet this month to agree funding and a plan to implement the findings of an internal review that recommends a new initiative focused on “ethics, society and technology” and improved access to those on lower incomes."