"The British Standards Institute (BSI) commissioned a group of scientists, academics, ethicists, and philosophers to provide guidance on potential hazards and protective measures. They presented their guidelines at a robotics conference in Oxford, England last week. "As far as I know this is the first published standard for the ethical design of robots," professor of robotics at the University of the West of England Alan Winfield told the Guardian... The EU, which Britain will soon leave, is also working on robot ethics standards. Its provisional code of conduct for robotics engineers and users includes provisions like "robots should act in the best interests of humans" and forbids users from modifying a robot to enable it to function as a weapon."
Issues and developments related to ethics, information, and technologies, as examined in the ethics and intellectual property graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology" will be published in Summer 2025. Kip Currier, PhD, JD
Showing posts with label Isaac Asimov's laws. Show all posts
Wednesday, September 21, 2016
British Philosophers Consider the Ethics of a Robotic Future; PC Magazine, 9/20/16
Tom Brant, PC Magazine; British Philosophers Consider the Ethics of a Robotic Future:
Monday, September 19, 2016
Do no harm, don't discriminate: official guidance issued on robot ethics; Guardian, 9/18/16
Hannah Devlin, Guardian; Do no harm, don't discriminate: official guidance issued on robot ethics:
"Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots.
The document, BS8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.
Welcoming the guidelines at the Social Robotics and AI conference in Oxford, Alan Winfield, a professor of robotics at the University of the West of England, said they represented “the first step towards embedding ethical values into robotics and AI”.
“As far as I know this is the first published standard for the ethical design of robots,” Winfield said after the event. “It’s a bit more sophisticated than Asimov’s laws – it basically sets out how to do an ethical risk assessment of a robot.”"