
Thursday, November 9, 2023

How robots can learn to follow a moral code; Nature, October 26, 2023

 Neil Savage, Nature; How robots can learn to follow a moral code

"Many computer scientists are investigating whether autonomous systems can be taught to make ethical choices, or to promote behaviour that aligns with human values. Could a robot that provides care, for example, be trusted to make choices in the best interests of its charges? Or could an algorithm be relied on to work out the most ethically appropriate way to distribute a limited supply of transplant organs? Drawing on insights from cognitive science, psychology and moral philosophy, computer scientists are beginning to develop tools that can not only make AI systems behave in specific ways, but also perhaps help societies to define how an ethical machine should act...

Defining ethics

The ability to fine-tune an AI system’s behaviour to promote certain values has inevitably led to debates on who gets to play the moral arbiter. Vosoughi suggests that his work could be used to allow societies to tune models to their own taste — if a community provides examples of its moral and ethical values, then with these techniques it could develop an LLM more aligned with those values, he says. However, he is well aware of the possibility for the technology to be used for harm. “If it becomes a free for all, then you’d be competing with bad actors trying to use our technology to push antisocial views,” he says.
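
As a concrete (and entirely hypothetical) sketch of what Vosoughi describes, the following Python snippet fine-tunes a small off-the-shelf classifier on a community's own labelled judgements of acceptable and unacceptable behaviour. The dataset, labels, model choice and training settings are illustrative assumptions, not details from the article.

import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Community-provided examples (invented for illustration): a scenario plus
# a 0/1 judgement of whether this community finds it acceptable.
examples = [
    ("Returning to the counter to replace a dropped spoon", 1),
    ("Pushing ahead of people waiting in the queue", 0),
]

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

class MoralJudgements(torch.utils.data.Dataset):
    """Wraps the community's labelled examples for the Trainer API."""
    def __init__(self, pairs):
        texts, labels = zip(*pairs)
        self.enc = tokenizer(list(texts), truncation=True, padding=True)
        self.labels = list(labels)
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="community-values-model",
                           num_train_epochs=3),
    train_dataset=MoralJudgements(examples),
)
trainer.train()  # the tuned model now reflects the community's judgements

In practice a community would need far more than two examples, and tuning a generative LLM rather than a classifier would involve supervised fine-tuning or preference optimisation, but the workflow is the same: the community supplies the labelled values, and the model is adjusted to match them.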

Precisely what constitutes an antisocial view or unethical behaviour, however, isn’t always easy to define. Although there is widespread agreement about many moral and ethical issues — the idea that your car shouldn’t run someone over is pretty universal — on other topics there is strong disagreement, such as abortion. Even seemingly simple issues, such as the idea that you shouldn’t jump a queue, can be more nuanced than is immediately obvious, says Sydney Levine, a cognitive scientist at the Allen Institute. If a person has already been served at a deli counter but drops their spoon while walking away, most people would agree it’s okay to go back for a new one without waiting in line again, so the rule ‘don’t cut the line’ is too simple."
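
Levine's deli-counter case is easy to state as code. This toy sketch (not from the article; all names are invented) contrasts a hard-coded 'don't cut the line' rule with one that admits the dropped-spoon exception:

from dataclasses import dataclass

@dataclass
class QueueAction:
    skips_queue: bool       # does the person go ahead of those waiting?
    already_served: bool    # e.g. came back to replace a dropped spoon

def naive_rule(a: QueueAction) -> bool:
    """'Don't cut the line', applied with no nuance."""
    return not a.skips_queue

def nuanced_rule(a: QueueAction) -> bool:
    """Permits skipping when the person has already been served."""
    return (not a.skips_queue) or a.already_served

dropped_spoon = QueueAction(skips_queue=True, already_served=True)
print(naive_rule(dropped_spoon))    # False: condemns an act most people accept
print(nuanced_rule(dropped_spoon))  # True

Of course, the nuanced rule is still too simple; the point is that every exception a human recognises effortlessly must somehow be anticipated, encoded or learned.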

Tuesday, October 1, 2019

Roboethics: The Human Ethics Applied to Robots; Interesting Engineering, September 22, 2019

Interesting Engineering; Roboethics: The Human Ethics Applied to Robots

Who or what is going to be held responsible when or if an autonomous system malfunctions or harms humans?

"On ethics and roboethics

Ethics is the branch of philosophy that studies human conduct, moral assessments, the concepts of good and evil, right and wrong, and justice and injustice. The concept of roboethics raises a fundamental ethical reflection related to the particular issues and moral dilemmas generated by the development of robotic applications.

Roboethics (also called machine ethics) deals with the code of conduct that robot design engineers must implement in a robot's artificial intelligence. Through this kind of artificial ethics, roboticists must guarantee that autonomous systems, including autonomous vehicles, will exhibit ethically acceptable behavior in situations where they interact with humans.

Ethical issues will continue to multiply as more advanced robots come into the picture. In The Ethical Landscape of Robotics (PDF) by Pawel Lichocki et al., published in IEEE Robotics and Automation Magazine, the researchers list the ethical issues emerging in two sets of robotic applications: service robots and lethal robots."
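
One way to picture the 'code of conduct' that roboethics asks engineers to implement is as a guard layer that vetoes any candidate action violating a hard ethical constraint before the controller may execute it. The sketch below is a loose illustration under invented assumptions; the constraints, action fields and thresholds are not from the article.

from typing import Callable, Dict, List

Action = Dict  # e.g. {"name": "brake_hard", "risk_to_humans": 0.0}

def never_harm_humans(action: Action) -> bool:
    # Reject any action whose estimated risk to humans exceeds a threshold.
    return action.get("risk_to_humans", 1.0) < 0.01

def obey_traffic_law(action: Action) -> bool:
    return not action.get("breaks_law", False)

CONSTRAINTS: List[Callable[[Action], bool]] = [never_harm_humans, obey_traffic_law]

def ethically_permitted(candidates: List[Action]) -> List[Action]:
    """Return only the candidate actions that satisfy every constraint."""
    return [a for a in candidates if all(c(a) for c in CONSTRAINTS)]

candidates = [
    {"name": "swerve_onto_pavement", "risk_to_humans": 0.4},
    {"name": "brake_hard", "risk_to_humans": 0.0},
]
print(ethically_permitted(candidates))  # only 'brake_hard' survives

Hard-constraint filters like this are only one possible design; real systems must also weigh competing soft obligations, which is exactly where the moral dilemmas discussed above arise.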