Issues and developments related to ethics, information, and technologies, examined in the ethics and intellectual property graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology" will be published in Summer 2025. Kip Currier, PhD, JD

Showing posts with label marginalized voices.

Thursday, March 15, 2018

Can Higher Education Make Silicon Valley More Ethical?; Chronicle of Higher Education, March 14, 2018

Nell Gluckman, Chronicle of Higher Education; Can Higher Education Make Silicon Valley More Ethical?

"Jim Malazita, an assistant professor at Rensselaer Polytechnic Institute, hopes to infuse ethics lessons into core computer-science courses."...

"Q. You mentioned you’ve been getting some pushback.

A. I’ve had to do a lot of social work with computer-science faculty. The faculty were like, This sounds cool, but will they still be able to move on in computer science? We’re using different, messier data sets. Will they still understand the formal aspects of computing?

Q. What do you tell faculty members to convince them that this is a good use of your students’ time?

A. I use a couple of strategies that sometimes work, sometimes don’t. It’s surprisingly important to talk about my own technical expertise. I only moved into social science and humanities as a Ph.D. student. As an undergraduate, my degree was in digital media design. So you can trust me with this content.

It’s helpful to also cast it in terms of helping women and underrepresented-minority retention in computer science. These questions have an impact on all students, but especially women and underrepresented minorities who are used to having their voices marginalized. The faculty want those numbers up."
Tuesday, March 6, 2018
Here’s how Canada can be a global leader in ethical AI; The Conversation, February 22, 2018
Fenwick McKelvey, Abhishek Gupta, The Conversation; Here’s how Canada can be a global leader in ethical AI
"Putting Canada in the lead
"Putting Canada in the lead
Canada has a clear choice. Either it embraces the potential of being a leader in responsible AI, or it risks legitimating a race to the bottom where ethics, equity and justice are absent.
Better guidance for researchers on how the Canadian Charter of Rights and Freedoms relates to AI research and development is a good first step. From there, Canada can create a just, equitable and stable foundation for a research agenda that situates the new technology within longstanding social institutions.
Canada also needs a more coordinated, inclusive national effort that prioritizes otherwise marginalized voices. These consultations will be key to positioning Canada as a beacon in this field.
Without these measures, Canada could lag behind. Europe is already drafting important new approaches to data protection. New York City launched a task force this fall to become a global leader on governing automated decision making. We hope this leads to active consultation with city agencies, academics across the sciences and the humanities as well as community groups, from Data for Black Lives to Picture the Homeless, and consideration of algorithmic impact assessments.
These initiatives should provide a helpful context as Canada develops its own governance strategy and works out how to include Indigenous knowledge within that.
If Canada develops a strong national strategy approach to AI governance that works across sectors and disciplines, it can lead at the global level."