Issues and developments related to ethics, information, and technologies, examined in the ethics and intellectual property graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology" will be published in Summer 2025. Kip Currier, PhD, JD

Showing posts with label roboticists.

Tuesday, October 1, 2019
Roboethics: The Human Ethics Applied to Robots; Interesting Engineering, September 22, 2019
Susan Fourtané, Interesting Engineering; Roboethics: The Human Ethics Applied to Robots
Who or what is going to be held responsible when or if an autonomous system malfunctions or harms humans?
"On ethics and roboethics
Ethics is the branch of philosophy which studies human conduct, moral
assessments, the concepts of good and evil, right and wrong, justice
and injustice. The concept of roboethics brings up a fundamental ethical
reflection that is related to particular issues and moral dilemmas
generated by the development of robotic applications.
Roboethics --also called machine ethics-- deals with the code of
conduct that robotic designer engineers must implement in the Artificial
Intelligence of a robot. Through this kind of artificial ethics,
roboticists must guarantee that autonomous systems are going to be able
to exhibit ethically acceptable behavior in situations where robots or
any other autonomous systems such as autonomous vehicles interact with
humans.
Ethical issues are going to continue to be on the rise as long as more advanced robotics come into the picture. In The Ethical Landscape of Robotics (PDF)
by Pawel Lichocki et al., published by IEEE Robotics and Automation
Magazine, the researchers list various ethical issues emerging in two
sets of robotic applications: Service robots and lethal robots."
Friday, March 2, 2018
Philosophers are building ethical algorithms to help control self-driving cars; Quartz, February 28, 2018
Olivia Goldhill, Quartz; Philosophers are building ethical algorithms to help control self-driving cars
"Artificial intelligence experts and roboticists aren’t the only ones working on the problem of autonomous vehicles. Philosophers are also paying close attention to the development of what, from their perspective, looks like a myriad of ethical quandaries on wheels.
The field has been particularly focused over the past few years on one particular philosophical problem posed by self-driving cars: They are a real-life enactment of a moral conundrum known as the Trolley Problem. In this classic scenario, a trolley is going down the tracks towards five people. You can pull a lever to redirect the trolley, but there is one person stuck on the only alternative track. The scenario exposes the moral tension between actively doing versus allowing harm: Is it morally acceptable to kill one to save five, or should you allow five to die rather than actively hurt one?"
"Artificial intelligence experts and roboticists aren’t the only ones working on the problem of autonomous vehicles. Philosophers are also paying close attention to the development of what, from their perspective, looks like a myriad of ethical quandaries on wheels.
The field has been particularly focused over the past few years on one particular philosophical problem posed by self-driving cars: They are a real-life enactment of a moral conundrum known as the Trolley Problem. In this classic scenario, a trolley is going down the tracks towards five people. You can pull a lever to redirect the trolley, but there is one person stuck on the only alternative track. The scenario exposes the moral tension between actively doing versus allowing harm: Is it morally acceptable to kill one to save five, or should you allow five to die rather than actively hurt one?"
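Viewed as a decision problem, the trolley scenario pins down exactly what a control policy must commit to. Here is a toy Python sketch (my illustration, not from the article) of the two classic answers, with made-up outcome numbers standing in for whatever harm predictions a real planner would produce.

    # Hypothetical sketch: two decision rules applied to a trolley-style choice.
    # "stay" lets five people be harmed; "swerve" actively harms one.

    outcomes = {
        "stay":   {"harmed": 5, "active_intervention": False},
        "swerve": {"harmed": 1, "active_intervention": True},
    }

    def utilitarian(options):
        # Minimize total harm, regardless of whether the harm is caused actively.
        return min(options, key=lambda o: options[o]["harmed"])

    def deontological(options):
        # Never actively cause harm, even if inaction leads to more harm.
        passive = [o for o in options if not options[o]["active_intervention"]]
        return passive[0] if passive else None

    print(utilitarian(outcomes))    # -> swerve (1 harmed instead of 5)
    print(deontological(outcomes))  # -> stay (refuses to actively harm)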
Saturday, December 21, 2013
Roboticist Illah Nourbakhsh explores the dark side of our "robot futures"; Pittsburgh City Paper, 12/18/13
Bill O'Driscoll, Pittsburgh City Paper; Roboticist Illah Nourbakhsh explores the dark side of our "robot futures":
"Illah Nourbakhsh studies and designs robots for a living. But if you expect his new book, Robot Futures, to depict a care-free Tomorrowland of electronic butlers and automated fun, look elsewhere. The lively and accessible Robot Futures ($24.95, MIT Press) warns of a society warped by our relationships with a new "species" that knows more about us than we know about it ... and whose representatives are often owned by someone profiting at our expense.
The problem, says Nourbakhsh, is that we're racing into our Robot Future without considering the social, moral and legal implications."
Monday, November 25, 2013
Already Anticipating ‘Terminator’ Ethics; New York Times, 11/24/13
John Markoff, New York Times; Already Anticipating ‘Terminator’ Ethics:
"What could possibly go wrong?
That was a question that some of the world’s leading roboticists faced at a technical meeting in October, when they were asked to consider what the science-fiction writer Isaac Asimov anticipated a half-century ago: the need to design ethical behavior into robots...
All of which make questions about robots and ethics more than hypothetical for roboticists and policy makers alike.
The discussion about robots and ethics came during this year’s Humanoids technical conference. At the conference, which focused on the design and application of robots that appear humanlike, Ronald C. Arkin delivered a talk on “How to NOT Build a Terminator,” picking up where Asimov left off with his fourth law of robotics — “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
While he did an effective job posing the ethical dilemmas, he did not offer a simple solution. His intent was to persuade the researchers to confront the implications of their work."
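Asimov's ordered laws translate naturally into a priority scheme in code: an action is allowed only if it clears every law, checked from the highest priority down. A hypothetical Python sketch follows (mine, not Arkin's or Asimov's); the predicates are placeholders for the perception and prediction a real system would need.

    # Hypothetical sketch: Asimov-style laws as an ordered series of vetoes,
    # checked highest-priority first.

    def violates_zeroth(action):   # harms humanity, or lets humanity come to harm
        return action.get("harms_humanity", False)

    def violates_first(action):    # harms a human, or lets a human come to harm
        return action.get("harms_human", False)

    LAWS = [violates_zeroth, violates_first]

    def permitted(action):
        """An action is permitted only if it violates none of the ordered laws."""
        return not any(law(action) for law in LAWS)

    print(permitted({"harms_human": True}))   # -> False
    print(permitted({"harms_human": False}))  # -> True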