Showing posts with label philosophers.

Friday, April 29, 2022

LSU to Embed Ethics in the Development of New Technologies, Including AI; LSU Office of Research and Economic Development, April 2022

Elsa Hahne, LSU Office of Research and Economic Development; LSU to Embed Ethics in the Development of New Technologies, Including AI

"“If we want to educate professionals who not only understand their professional obligations but become leaders in their fields, we need to make sure our students understand ethical conflicts and how to resolve them,” Goldgaber said. “Leaders don’t just do what they’re told—they make decisions with vision.”

The rapid development of new technologies has put researchers in her field, the world of Socrates and Rousseau, in the new and not-altogether-comfortable role of providing what she calls “ethics emergency services” when emerging capabilities have unintended consequences for specific groups of people.

“We can no longer rely on the traditional division of labor between STEM and the humanities, where it’s up to philosophers to worry about ethics,” Goldgaber said. “Nascent and fast-growing technologies, such as artificial intelligence, disrupt our everyday normative understandings, and most often, we lack the mechanisms to respond. In this scenario, it’s not always right to ‘stay in your lane’ or ‘just do your job.’”"

Saturday, March 24, 2018

Driverless cars raise so many ethical questions. Here are just a few of them.; San Diego Union-Tribune, March 23, 2018

Lawrence M. Hinman, San Diego Union-Tribune; Driverless cars raise so many ethical questions. Here are just a few of them.

"Even more troubling will be the algorithms themselves, even if the engineering works flawlessly. How are we going to program autonomous vehicles when they are faced with a choice among competing evils? Should they be programmed to harm or kill the smallest number of people, swerving to avoid hitting two people but unavoidably hitting one? (This is the famous “trolley problem” that has vexed philosophers and moral psychologists for over half a century.)

Should your car be programmed to avoid crashing into a group of schoolchildren, even if that means driving you off the side of a cliff? Most of us would opt for maximizing the number of lives saved, except when one of those lives belongs to us or our loved ones.

These are questions that take us to the heart of the moral life in a technological society. They are already part of a lively and nuanced discussion among philosophers, engineers, policy makers and technologists. It is a conversation to which the larger public should be invited.

The ethics of dealing with autonomous systems will be a central issue of the coming decades."
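As a rough illustration (mine, not Hinman's), the "harm or kill the smallest number of people" rule he describes amounts to a purely utilitarian minimization. The maneuver names and casualty counts below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_casualties: int  # people predicted to be harmed by this choice

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the option that harms the fewest people (a purely utilitarian rule)."""
    return min(options, key=lambda m: m.expected_casualties)

if __name__ == "__main__":
    options = [
        Maneuver("swerve left", expected_casualties=1),
        Maneuver("continue straight", expected_casualties=2),
    ]
    print(choose_maneuver(options).name)  # -> "swerve left"
```

The tidiness of the code is exactly what the article questions: the hard part is not the minimization, but whether casualty counts are the right thing to minimize in the first place.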

Friday, March 2, 2018

Philosophers are building ethical algorithms to help control self-driving cars; Quartz, February 28, 2018

Olivia Goldhill, Quartz; Philosophers are building ethical algorithms to help control self-driving cars

"Artificial intelligence experts and roboticists aren’t the only ones working on the problem of autonomous vehicles. Philosophers are also paying close attention to the development of what, from their perspective, looks like a myriad of ethical quandaries on wheels.

The field has been particularly focused over the past few years on one particular philosophical problem posed by self-driving cars: They are a real-life enactment of a moral conundrum known as the Trolley Problem. In this classic scenario, a trolley is going down the tracks towards five people. You can pull a lever to redirect the trolley, but there is one person stuck on the only alternative track. The scenario exposes the moral tension between actively doing versus allowing harm: Is it morally acceptable to kill one to save five, or should you allow five to die rather than actively hurt one?"
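To make the tension concrete, here is a toy sketch (my illustration, not Goldhill's) contrasting the two intuitions the Trolley Problem exposes, using the classic five-versus-one counts from the scenario above:

```python
def utilitarian_choice(deaths_if_stay: int, deaths_if_pull: int) -> str:
    """Pull the lever whenever doing so results in fewer deaths."""
    return "pull lever" if deaths_if_pull < deaths_if_stay else "do nothing"

def no_active_harm_choice(deaths_if_stay: int, deaths_if_pull: int) -> str:
    """Refuse to actively redirect harm onto anyone, regardless of the counts."""
    return "do nothing"

print(utilitarian_choice(deaths_if_stay=5, deaths_if_pull=1))     # -> "pull lever"
print(no_active_harm_choice(deaths_if_stay=5, deaths_if_pull=1))  # -> "do nothing"
```

Both rules are trivial to program; the philosophers' point is that choosing between them is a moral commitment, not an engineering detail.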