Showing posts with label self-driving cars.

Tuesday, December 26, 2023

The “Trolley Problem” Doesn’t Work for Self-Driving Cars: The most famous thought experiment in ethics needs a rethink; IEEE Spectrum, December 12, 2023

IEEE Spectrum; The “Trolley Problem” Doesn’t Work for Self-Driving Cars: The most famous thought experiment in ethics needs a rethink

"“The trolley paradigm was useful to increase awareness of the importance of ethics for AV decision-making, but it is a misleading framework to address the problem,” says Dubljević, a professor of philosophy and science, technology and society at North Carolina State University. “The outcomes of each vehicle trajectory are far from being certain like the two options in the trolley dilemma [and] unlike the trolley dilemma, which describes an immediate choice, decision-making in AVs has to be programmed in advance.”...

“The goal is to create a decision-making system that avoids human biases and limitations due to reaction time, social background, and cognitive laziness, while at the same time aligning with human common sense and moral intuition,” says Dubljević. “For this purpose, it’s crucial to study human moral intuition by creating optimal conditions for people to judge.”"

Monday, May 10, 2021

South Africa to introduce regulations around self-driving cars; BusinessTech, May 7, 2021

BusinessTech; South Africa to introduce regulations around self-driving cars

"The Department of Transport says that it plans to introduce new regulations around self-driving cars in South Africa, as it expects autonomous vehicles (AVs) to become a reality in the country in the not too distant future.

In its strategic performance plan for 2021/2022, the department said that these vehicles will move on streets with little or no control by humans.

It added that autonomous vehicles could solve a number of mobility issues for the country – including road safety, social inclusion, emissions and congestion.

“Government is putting in place policy, legislation and strategies to take advantage of the benefits associated with AVs, while also minimising risks and unpremeditated consequences,” it said.

“The new policy, legislation and strategies should provide a welcoming environment for testing and development of AV technology.”"

Tuesday, January 29, 2019

The unnatural ethics of AI could be its undoing; The Outline, January 29, 2019

The Outline; The unnatural ethics of AI could be its undoing

"When I used to teach philosophy at universities, I always resented having to cover the Trolley Problem, which struck me as everything the subject should not be: presenting an extreme situation, wildly detached from most dilemmas the students would normally face, in which our agency is unrealistically restricted, and using it as some sort of ideal model for ethical reasoning (the first model of ethical reasoning that many students will come across, no less). Ethics should be about things like the power structures we enter into at work, what relationships we decide to pursue, who we are or want to become — not this fringe-case intuition-pump nonsense.

But maybe I’m wrong. Because, if we believe tech gurus at least, the Trolley Problem is about to become of huge real-world importance. Human beings might not find themselves in all that many Trolley Problem-style scenarios over the course of their lives, but soon we're going to start seeing self-driving cars on our streets, and they're going to have to make these judgments all the time. Self-driving cars are potentially going to find themselves in all sorts of accident scenarios where the AI controlling them has to decide which human lives it ought to preserve. But in practice what this means is that human beings will have to grapple with the Trolley Problem — since they're going to be responsible for programming the AIs...

I'm much more sympathetic to the “AI is bad” line. We have little reason to trust that big tech companies (i.e. the people responsible for developing this technology) are doing it to help us, given how wildly their interests diverge from our own."

Saturday, March 24, 2018

Can Self-Driving Cars Be Engineered to Be Ethical?; Voice of America, March 21, 2018

Bryan Lynn, Voice of America; Can Self-Driving Cars Be Engineered to Be Ethical? (Reported for VOA Learning English, with additional information from Reuters and the Associated Press; edited by Kelly Jean Kelly.)

"Nicholas Evans is a professor of philosophy at the University of Massachusetts in Lowell, Massachusetts...

Evans is receiving money from the National Science Foundation to study the ethics of decision-making algorithms in autonomous vehicles. He says self-driving cars need to be programmed to react to many difficult situations. But, he adds, even simple driving activities – such as having vehicles enter a busy street – can be dangerous...

One of the most basic questions is how to decide the value of human lives. Evans says most people do not like to think about this question. But, he says, it is highly important in developing self-driving technology...

“So this is one of the really tricky questions behind autonomous vehicles – is how do you value different people's lives and how do you program a car to value different people's lives.”"

The Lose-Lose Ethics of Testing Self-Driving Cars in Public; Wired, March 23, 2018

Aarian Marshall, Wired; The Lose-Lose Ethics of Testing Self-Driving Cars in Public

"The unfortunate truth is that there will always be tradeoffs. A functioning society should probably create space—even beyond the metaphorical sense—to research and then develop potentially life-saving technology. If you’re interested in humanity’s long-term health and survival, this is a good thing. (Even failure can be instructive here. What didn’t work, and why?) But a functioning society should also strive to guarantee that its citizens aren’t killed in the midst of beta testing. We’ve made this work for experimental drugs, finding an agreeable balance between risking lives today and saving them tomorrow."

Friday, March 2, 2018

Philosophers are building ethical algorithms to help control self-driving cars; Quartz, February 28, 2018

Olivia Goldhill, Quartz; Philosophers are building ethical algorithms to help control self-driving cars

"Artificial intelligence experts and roboticists aren’t the only ones working on the problem of autonomous vehicles. Philosophers are also paying close attention to the development of what, from their perspective, looks like a myriad of ethical quandaries on wheels.

The field has been particularly focused over the past few years on one particular philosophical problem posed by self-driving cars: They are a real-life enactment of a moral conundrum known as the Trolley Problem. In this classic scenario, a trolley is going down the tracks towards five people. You can pull a lever to redirect the trolley, but there is one person stuck on the only alternative track. The scenario exposes the moral tension between actively doing versus allowing harm: Is it morally acceptable to kill one to save five, or should you allow five to die rather than actively hurt one?"

Monday, February 20, 2017

The big moral dilemma facing self-driving cars; Washington Post, February 20, 2017

Steven Overly, Washington Post; The big moral dilemma facing self-driving cars

"Researchers at the University of Pennsylvania have dubbed this “algorithm aversion.” In a 2014 study, participants were asked to observe a computer and a human make predictions about the future, such as how a student would perform based on past test scores. Researchers found that “people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake.”"

Sunday, November 20, 2016

Whose life should your car save?; Pittsburgh Post-Gazette, November 20, 2016

Azim Shariff, Iyad Rahwan and Jean-Francois Bonnefon, Pittsburgh Post-Gazette; Whose life should your car save?

"The widespread use of self-driving cars promises to bring substantial benefits to transportation efficiency, public safety and personal well-being. Car manufacturers are working to overcome the remaining technical challenges that stand in the way of this future. Our research, however, shows that there is also an important ethical dilemma that must be solved before people will be comfortable trusting their lives to these cars.

As the National Highway Traffic Safety Administration has noted, autonomous cars may find themselves in circumstances in which the car must choose between risks to its passengers and risks to a potentially greater number of pedestrians. Imagine a situation in which the car must either run off the road or plow through a large crowd of people: Whose risk should the car’s algorithm aim to minimize?

This dilemma was explored in studies that we recently published in the journal Science...

This is why, despite its mixed messages, Mercedes-Benz should be applauded for speaking out on the subject. The company acknowledges that to “clarify these issues of law and ethics in the long term will require broad international discourse.”"

Friday, August 12, 2016

Should a Self-Driving Car Kill Two Jaywalkers or One Law-Abiding Citizen?; Slate, August 11, 2016

Jacob Brogan, Slate; Should a Self-Driving Car Kill Two Jaywalkers or One Law-Abiding Citizen?

"Anyone who’s followed the debates surrounding autonomous vehicles knows that moral quandaries inevitably arise. As Jesse Kirkpatrick has written in Slate, those questions most often come down to how the vehicles should perform when they’re about to crash. What do they do if they have to choose between killing a passenger and harming a pedestrian? How should they behave if they have to decide between slamming into a child or running over an elderly man?

It’s hard to figure out how a car should make such decisions in part because it’s difficult to get humans to agree on how we should make them. By way of evidence, look to Moral Machine, a website created by a group of researchers at the MIT Media Lab. As the Verge’s Russell Brandon notes, the site effectively gameifies the classic trolley problem, folding in a variety of complicated variations along the way."