Tuesday, October 1, 2019
Roboethics: The Human Ethics Applied to Robots; Interesting Engineering, September 22, 2019
Susan Fourtané, Interesting Engineering; Roboethics: The Human Ethics Applied to Robots
Who or what is going to be held responsible when or if an autonomous system malfunctions or harms humans?
"On ethics and roboethics
Ethics is the branch of philosophy which studies human conduct, moral
assessments, the concepts of good and evil, right and wrong, justice
and injustice. The concept of roboethics brings up a fundamental ethical
reflection that is related to particular issues and moral dilemmas
generated by the development of robotic applications.
Roboethics --also called machine ethics-- deals with the code of
conduct that robotic designer engineers must implement in the Artificial
Intelligence of a robot. Through this kind of artificial ethics,
roboticists must guarantee that autonomous systems are going to be able
to exhibit ethically acceptable behavior in situations where robots or
any other autonomous systems such as autonomous vehicles interact with
humans.
Ethical issues will continue to be on the rise as more advanced robotics come into the picture. In The Ethical Landscape of Robotics (PDF) by Pawel Lichocki et al., published in IEEE Robotics and Automation Magazine, the researchers list various ethical issues emerging in two sets of robotic applications: service robots and lethal robots."
Wednesday, July 1, 2015
Machine ethics: The robot’s dilemma; Nature, July 1, 2015
Boer Deng, Nature; Machine ethics: The robot’s dilemma:
"How ethical robots are built could have major consequences for the future of robotics, researchers say. Michael Fisher, a computer scientist at the University of Liverpool, UK, thinks that rule-bound systems could be reassuring to the public. “People are going to be scared of robots if they're not sure what it's doing,” he says. “But if we can analyse and prove the reasons for their actions, we are more likely to surmount that trust issue.” He is working with Winfield and others on a government-funded project to verify that the outcomes of ethical machine programs are always knowable. By contrast, the machine-learning approach promises robots that can learn from experience, which could ultimately make them more flexible and useful than their more rigidly programmed counterparts. Many roboticists say that the best way forward will be a combination of approaches. “It's a bit like psychotherapy,” says Pereira. “You probably don't just use one theory.” The challenge — still unresolved — is to combine the approaches in a workable way."