"Anyone who’s followed the debates surrounding autonomous vehicles knows that moral quandaries inevitably arise. As Jesse Kirkpatrick has written in Slate, those questions most often come down to how the vehicles should perform when they’re about to crash. What do they do if they have to choose between killing a passenger and harming a pedestrian? How should they behave if they have to decide between slamming into a child or running over an elderly man? It’s hard to figure out how a car should make such decisions in part because it’s difficult to get humans to agree on how we should make them. By way of evidence, look to Moral Machine, a website created by a group of researchers at the MIT Media Lab. As the Verge’s Russell Brandon notes, the site effectively gameifies the classic trolley problem, folding in a variety of complicated variations along the way."
Issues and developments related to ethics, information, and technologies, examined in the ethics and intellectual property graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology" will be published in Summer 2025. Kip Currier, PhD, JD
Showing posts with label moral decisions made by machine intelligence. Show all posts
Showing posts with label moral decisions made by machine intelligence. Show all posts
Friday, August 12, 2016
Should a Self-Driving Car Kill Two Jaywalkers or One Law-Abiding Citizen?; Slate, 8/11/16
Jacob Brogan, Slate; Should a Self-Driving Car Kill Two Jaywalkers or One Law-Abiding Citizen? :