Showing posts with label accountability mechanisms. Show all posts

Sunday, March 31, 2024

Philosophy, ethics, and the pursuit of 'responsible' artificial intelligence; Rochester Institute of Technology (RIT), March 7, 2024

 Felicia Swartzenberg, Rochester Institute of Technology (RIT); Philosophy, ethics, and the pursuit of 'responsible' artificial intelligence

"Evan Selinger, professor in RIT’s Department of Philosophy, has taken an interest in the ethics of AI and the policy gaps that need to be filled in. Through a humanities lens, Selinger asks the questions, "How can AI cause harm, and what can governments and companies creating AI programs do to address and manage it?" Answering them, he explained, requires an interdisciplinary approach...

“AI ethics has core values and principles, but there’s endless disagreement about interpreting and applying them and creating meaningful accountability mechanisms,” said Selinger. “Some people are rightly worried that AI can be co-opted into ‘ethics washing’—weak checklists, flowery mission statements, and empty rhetoric that covers over abuses of power. Fortunately, I’ve had great conversations about this issue, including with folks at Microsoft, on why it is important to consider a range of positions.”

There are many issues that need to be addressed as companies pursue responsible AI, including public concern over whether generative AI is stealing from artists. Some of Selinger’s recent research has focused on the back-end issues with developing AI, such as the human toll that comes with testing AI chatbots before they’re released to the public. Other issues focus on policy, such as what to do about the dangers posed by facial recognition and other automated approaches to surveillance.

In a chapter for a book that will be published by MIT Press, Selinger, along with co-authors Brenda Leong, partner at Luminos.Law, and Albert Fox Cahn, founder and executive director of Surveillance Technology Oversight Project, offer concrete suggestions for conducting responsible AI audits, while also considering civil liberties objections."

Thursday, August 30, 2018

Ethics in Computing Panel; InfoQ, August 28, 2018

[Video] InfoQ; Ethics in Computing Panel

"Summary
 
The panelists discuss the important points around privacy, security, safety online, and intent of software today." 


"Kathy Pham is currently researching the Ethics and Governance of Artificial Intelligence and Software Engineering at the Harvard Berkman Klein Center and MIT Media Lab."

Kathy Pham quote from video: 

[13:11 in video] "What a good engineer is maybe is something we should rethink as well.

I spend a lot of time in academia now. And I hear over and over again that people who are of the computer science plus philosophy or computer science plus social science background, have the hardest time finding jobs. Even if they're within the CS Department they have such a hard time getting jobs because they're not like the real hard science, or the real hard engineering discipline...

Those kinds of people provide a really different perspective on how we build our products. So if you're in charge of hiring for your companies, perhaps we all just need to rethink how we hire people and what makes a good engineer."

"Natalie Evans Harris is COO and VP of Ecosystem Development at BrightHive."

Natalie Evans Harris quote from video:

[12:28 in video] "While we look at resumes and we care where you get your skills and degrees from, we also want to know what your ethical code of conduct is."