Showing posts with label Trolley Problem. Show all posts

Tuesday, December 26, 2023

The “Trolley Problem” Doesn’t Work for Self-Driving Cars: The most famous thought experiment in ethics needs a rethink; IEEE Spectrum, December 12, 2023

IEEE Spectrum; The “Trolley Problem” Doesn’t Work for Self-Driving Cars: The most famous thought experiment in ethics needs a rethink

"“The trolley paradigm was useful to increase awareness of the importance of ethics for AV decision-making, but it is a misleading framework to address the problem,” says Dubljević, a professor of philosophy and science, technology and society at North Carolina State University. “The outcomes of each vehicle trajectory are far from being certain like the two options in the trolley dilemma [and] unlike the trolley dilemma, which describes an immediate choice, decision-making in AVs has to be programmed in advance.”...

“The goal is to create a decision-making system that avoids human biases and limitations due to reaction time, social background, and cognitive laziness, while at the same time aligning with human common sense and moral intuition,” says Dubljević. “For this purpose, it’s crucial to study human moral intuition by creating optimal conditions for people to judge.”"

Friday, April 10, 2020

Michael Schur On Ethics And Morality In A Crisis; WBUR, April 9, 2020

Meghna Chakrabarti and Brittany Knotts, WBUR; Michael Schur On Ethics And Morality In A Crisis

"How do you think "The Good Place" characters Eleanor, Chidi, Tahani and Jason would have responded to the coronavirus in their Earth lives?...

MICHAEL SCHUR, ON HOW JASON WOULD RESPOND TO THE CORONAVIRUS...

Jason: “Then there's Jason, who is just an idiot. And Jason, I don't know if you saw one of the craziest photos to me of all of the photos we've all been looking at, it was that county line in Florida. ... Where one county had shut down its beaches, and the other county had not. And so as a result, the beach was entirely empty. And there was just like suddenly just a wall of people extending further out, down the beach, because that county's beaches were open. Whatever county that is, that's where Jason would be. … So he and Eleanor probably wouldn't have been dissimilar in terms of the way they approached this, but for different reasons. Eleanor was selfish, and Jason was just sort of impulsive and didn't really think anything through.”...

What do we owe each other? 
Michael Schur: “I said before that there's a certain sort of minimum that is required of everyone, to the best of our abilities. The basics, right. Staying inside, staying away from people, trying to kind of stop the spread of the disease. But then beyond that, there's an enormous sliding scale, I think. If you have the ability to, for example, pay your dog walker, if you have the financial means to continue to pay your dog walker who can't walk your dog anymore, or someone who helps you clean your house, or anybody who works for you in any capacity. If you have that ability, I think you need to do that. And then, you know, you keep sliding up the scale. If you have the ability to keep people on the payroll at your business who are working for you, even if it means you lose money, I think you have to do that, too. And it just keeps going up and up and up."

Tuesday, January 29, 2019

The unnatural ethics of AI could be its undoing; The Outline, January 29, 2019

The Outline; The unnatural ethics of AI could be its undoing

"When I used to teach philosophy at universities, I always resented having to cover the Trolley Problem, which struck me as everything the subject should not be: presenting an extreme situation, wildly detached from most dilemmas the students would normally face, in which our agency is unrealistically restricted, and using it as some sort of ideal model for ethical reasoning (the first model of ethical reasoning that many students will come across, no less). Ethics should be about things like the power structures we enter into at work, what relationships we decide to pursue, who we are or want to become — not this fringe-case intuition-pump nonsense.

But maybe I’m wrong. Because, if we believe tech gurus at least, the Trolley Problem is about to become of huge real-world importance. Human beings might not find themselves in all that many Trolley Problem-style scenarios over the course of their lives, but soon we're going to start seeing self-driving cars on our streets, and they're going to have to make these judgments all the time. Self-driving cars are potentially going to find themselves in all sorts of accident scenarios where the AI controlling them has to decide which human lives it ought to preserve. But in practice what this means is that human beings will have to grapple with the Trolley Problem — since they're going to be responsible for programming the AIs...

I'm much more sympathetic to the “AI is bad” line. We have little reason to trust that big tech companies (i.e. the people responsible for developing this technology) are doing it to help us, given how wildly their interests diverge from our own."

Thursday, March 8, 2018

Exploring AI ethics and accountability; Politico.eu, March 5, 2018

Nirvi Shah, Politico.eu; Exploring AI ethics and accountability

"In this special report on the future of artificial intelligence, we explore the technology’s implications. Are people ready to trust their lives to driverless cars? What about an AI doctor? Who’s to blame when price-setting algorithms work together to collude?

We also spoke to Armin Grunwald, an adviser to the German parliament tasked with mapping out the ethical implications of artificial intelligence. Grunwald, it turns out, has an answer to the trolley problem.

This article is part of the special report Confronting the Future of AI."

Friday, March 2, 2018

4 Philosophy Professors Weigh In on Why The Good Place Is So Forking Funny — and Important; Popsugar, February 28, 2018

Gwendolyn Purdom, Popsugar; 4 Philosophy Professors Weigh In on Why The Good Place Is So Forking Funny — and Important

"There's a scene in the second season of The Good Place where, in order to illustrate the classic moral dilemma known as The Trolley Problem, the characters are forced to live it. The famous thought experiment, which asks different variations of whether you would steer an unstoppable trolley into one person to avoid killing five, has long been a go-to for ethics scholars — but watching the show's hilariously gory take on it brought the lesson to life in a way Agnes Callard, an associate professor of philosophy at the University of Chicago, hadn't considered before. "There's something very violent about the thought experiment itself, like, we're asking them to imagine murdering people," Callard told POPSUGAR. "And the show just takes that really seriously, like, 'OK, let's really imagine it.'"

It's just one of the ways tuning into the NBC sitcom has been a fun first for philosophy and ethics professors like Callard, who aren't used to seeing their area of expertise at the center of a hit network comedy. Callard and the three other philosophy professors/The Good Place fans we talked to said that while pop culture has always reflected on philosophical themes, they don't remember a show or movie ever examining specific theories and works this explicitly."

Philosophers are building ethical algorithms to help control self-driving cars; Quartz, February 28, 2018

Olivia Goldhill, Quartz; Philosophers are building ethical algorithms to help control self-driving cars

"Artificial intelligence experts and roboticists aren’t the only ones working on the problem of autonomous vehicles. Philosophers are also paying close attention to the development of what, from their perspective, looks like a myriad of ethical quandaries on wheels.

The field has been particularly focused over the past few years on one particular philosophical problem posed by self-driving cars: They are a real-life enactment of a moral conundrum known as the Trolley Problem. In this classic scenario, a trolley is going down the tracks towards five people. You can pull a lever to redirect the trolley, but there is one person stuck on the only alternative track. The scenario exposes the moral tension between actively doing versus allowing harm: Is it morally acceptable to kill one to save five, or should you allow five to die rather than actively hurt one?"