Showing posts with label robot ethics. Show all posts

Sunday, December 18, 2016

An Abused, Dishwashing Robot Dreams of an Escape; Slate, 12/17/16

Madeline Raynor, Slate; An Abused, Dishwashing Robot Dreams of an Escape:
"Hum," above, is a science-fiction short from director Tom Teller and Frame 48. It follows a robot that works as a dishwasher in a restaurant, confined to a small, poorly lit room and abused by a cruel human boss."

Monday, September 19, 2016

Do no harm, don't discriminate: official guidance issued on robot ethics; Guardian, 9/18/16

Hannah Devlin, Guardian; Do no harm, don't discriminate: official guidance issued on robot ethics: "Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots.
The document, BS8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.
Welcoming the guidelines at the Social Robotics and AI conference in Oxford, Alan Winfield, a professor of robotics at the University of the West of England, said they represented “the first step towards embedding ethical values into robotics and AI”.
“As far as I know this is the first published standard for the ethical design of robots,” Winfield said after the event. “It’s a bit more sophisticated than Asimov’s laws – it basically sets out how to do an ethical risk assessment of a robot.”"

Wednesday, July 1, 2015

Machine ethics: The robot’s dilemma; Nature, 7/1/15

Boer Deng, Nature; Machine ethics: The robot’s dilemma:
"How ethical robots are built could have major consequences for the future of robotics, researchers say. Michael Fisher, a computer scientist at the University of Liverpool, UK, thinks that rule-bound systems could be reassuring to the public. “People are going to be scared of robots if they're not sure what it's doing,” he says. “But if we can analyse and prove the reasons for their actions, we are more likely to surmount that trust issue.” He is working with Winfield and others on a government-funded project to verify that the outcomes of ethical machine programs are always knowable.
By contrast, the machine-learning approach promises robots that can learn from experience, which could ultimately make them more flexible and useful than their more rigidly programmed counterparts. Many roboticists say that the best way forward will be a combination of approaches. “It's a bit like psychotherapy,” says Pereira. “You probably don't just use one theory.” The challenge — still unresolved — is to combine the approaches in a workable way."