Showing posts with label ethics washing.

Sunday, March 31, 2024

Philosophy, ethics, and the pursuit of 'responsible' artificial intelligence; Rochester Institute of Technology (RIT), March 7, 2024

 Felicia Swartzenberg, Rochester Institute of Technology (RIT); Philosophy, ethics, and the pursuit of 'responsible' artificial intelligence

"Evan Selinger, professor in RIT’s Department of Philosophy, has taken an interest in the ethics of AI and the policy gaps that need to be filled in. Through a humanities lens, Selinger asks the questions, "How can AI cause harm, and what can governments and companies creating AI programs do to address and manage it?" Answering them, he explained, requires an interdisciplinary approach...

“AI ethics has core values and principles, but there’s endless disagreement about interpreting and applying them and creating meaningful accountability mechanisms,” said Selinger. “Some people are rightly worried that AI can be co-opted into ‘ethics washing’—weak checklists, flowery mission statements, and empty rhetoric that covers over abuses of power. Fortunately, I’ve had great conversations about this issue, including with folks at Microsoft, on why it is important to consider a range of positions.”

There are many issues that need to be addressed as companies pursue responsible AI, including public concern over whether generative AI is stealing from artists. Some of Selinger’s recent research has focused on the back-end issues of developing AI, such as the human toll of testing AI chatbots before they’re released to the public. Other issues focus on policy, such as what to do about the dangers posed by facial recognition and other automated approaches to surveillance.

In a chapter for a book that will be published by MIT Press, Selinger, along with co-authors Brenda Leong, partner at Luminos.Law, and Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project, offers concrete suggestions for conducting responsible AI audits while also considering civil liberties objections."

Wednesday, June 21, 2023

Ethics Teams in Tech Are Stymied by Lack of Support; Stanford University Human-Centered Artificial Intelligence (HAI), June 21, 2023

Stanford University Human-Centered Artificial Intelligence (HAI); Ethics Teams in Tech Are Stymied by Lack of Support

"In recent years, AI companies have been publicly chided for generating machine learning algorithms that discriminate against historically marginalized groups. To quell that criticism, many companies pledged to ensure their products are fair, transparent, and accountable, but these promises are frequently criticized as being mere “ethics washing,” says Sanna Ali, who recently received her PhD from the Stanford University Department of Communication in the School of Humanities and Sciences. “There’s a concern that these companies talk the talk but don’t walk the walk.”

To explore whether that’s the case, Ali interviewed AI ethics workers from some of the largest companies in the field. The research project, co-authored with Stanford Assistant Professor of Communication Angèle Christin, Google researcher Andrew Smart, and Stanford W.M. Keck Professor and Professor of Management Science and Engineering Riitta Katila, was partially funded by a seed grant from Stanford HAI and published in the Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT ’23). The study found that ethics initiatives and interventions were difficult to implement in the tech industry’s institutional environment. Specifically, Ali found, teams were largely under-resourced and under-supported by leadership, and they lacked the authority to act on the problems they identified."

Sunday, January 23, 2022

The Humanities Can't Save Big Tech From Itself; Wired, January 12, 2022

Wired; The Humanities Can't Save Big Tech From Itself

 "I’ve been studying nontechnical workers in the tech and media industries for the past several years. Arguments to “bring in” sociocultural experts elide the truth that these roles and workers already exist in the tech industry and, in varied ways, always have. For example, many current UX researchers have advanced degrees in sociology, anthropology, and library and information sciences. And teachers and EDI (Equity, Diversity, and Inclusion) experts often occupy roles in tech HR departments.

Recently, however, the tech industry has been exploring where nontechnical expertise might counter some of the social problems associated with its products. Increasingly, tech companies look to law and philosophy professors to help them through the legal and moral intricacies of platform governance, to activists and critical scholars to help protect marginalized users, and to other specialists to assist with platform challenges like algorithmic oppression, disinformation, community management, user wellness, and digital activism and revolutions. These data-driven industries are trying hard to augment their technical know-how and troves of data with social, cultural, and ethical expertise, or what I often refer to as “soft” data.

But you can add all of the soft data workers you want and little will change unless the industry values that kind of data and expertise. In fact, many academics, policy wonks, and other sociocultural experts in the AI and tech ethics space are noticing a disturbing trend of tech companies seeking their expertise and then disregarding it in favor of more technical work and workers...

Finally, though the librarian profession is often cited as one that might save Big Tech from its disinformation dilemmas, some in LIS (Library and Information Science) argue they collectively have a long way to go before they’re up to the task. Safiya Noble noted the profession’s (just over 83% white) “colorblind” ideology and sometimes troubling commitment to neutrality. This commitment, the book Knowledge Justice explains, leads many librarians to believe, “Since we serve everyone, we must allow materials, ideas, and values from everyone.” In other words, librarians often defend allowing racist, transphobic, and other harmful information to stand alongside other materials by saying they must entertain “all sides” and allow people to find their way to the “best” information. This is the exact same error platforms often make in allowing disinformation and abhorrent content to flourish online."

Saturday, January 15, 2022

We’re failing at the ethics of AI. Here’s how we make real impact; World Economic Forum, January 14, 2022