Issues and developments related to ethics, information, and technologies, examined in the ethics and intellectual property graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology" will be published in Summer 2025. Kip Currier, PhD, JD
Showing posts with label "first step towards embedding ethical values into robotics and AI". Show all posts
Monday, September 19, 2016
Do no harm, don't discriminate: official guidance issued on robot ethics; Guardian, 9/18/16
Hannah Devlin, Guardian; Do no harm, don't discriminate: official guidance issued on robot ethics:
"Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots.
The document, BS8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.
Welcoming the guidelines at the Social Robotics and AI conference in Oxford, Alan Winfield, a professor of robotics at the University of the West of England, said they represented “the first step towards embedding ethical values into robotics and AI”.
“As far as I know this is the first published standard for the ethical design of robots,” Winfield said after the event. “It’s a bit more sophisticated than Asimov’s laws – it basically sets out how to do an ethical risk assessment of a robot.”"