Showing posts with label ethical dilemmas.

Thursday, November 14, 2019

The Ethical Dilemma at the Heart of Big Tech Companies; Harvard Business Review, November 14, 2019


"The central challenge ethics owners are grappling with is negotiating between external pressures to respond to ethical crises at the same time that they must be responsive to the internal logics of their companies and the industry. On the one hand, external criticisms push them toward challenging core business practices and priorities. On the other hand, the logics of Silicon Valley, and of business more generally, create pressures to establish or restore predictable processes and outcomes that still serve the bottom line.

We identified three distinct logics that characterize this tension between internal and external pressures..."

Tuesday, October 1, 2019

Roboethics: The Human Ethics Applied to Robots; Interesting Engineering, September 22, 2019

Interesting Engineering; Roboethics: The Human Ethics Applied to Robots

Who or what is going to be held responsible when or if an autonomous system malfunctions or harms humans?
"On ethics and roboethics 

Ethics is the branch of philosophy which studies human conduct, moral assessments, the concepts of good and evil, right and wrong, justice and injustice. The concept of roboethics brings up a fundamental ethical reflection that is related to particular issues and moral dilemmas generated by the development of robotic applications. 

Roboethics --also called machine ethics-- deals with the code of conduct that robotic designer engineers must implement in the Artificial Intelligence of a robot. Through this kind of artificial ethics, roboticists must guarantee that autonomous systems are going to be able to exhibit ethically acceptable behavior in situations where robots or any other autonomous systems such as autonomous vehicles interact with humans.

Ethical issues are going to continue to be on the rise as long as more advanced robotics come into the picture. In The Ethical Landscape of Robotics (PDF) by Pawel Lichocki et al., published by IEEE Robotics and Automation Magazine, the researchers list various ethical issues emerging in two sets of robotic applications: Service robots and lethal robots."

Thursday, September 5, 2019

AI Ethics Guidelines Every CIO Should Read; Information Week, August 7, 2019

John McClurg, Information Week; AI Ethics Guidelines Every CIO Should Read

"Technology experts predict the rate of adoption of artificial intelligence and machine learning will skyrocket in the next two years. These advanced technologies will spark unprecedented business gains, but along the way enterprise leaders will be called to quickly grapple with a smorgasbord of new ethical dilemmas. These include everything from AI algorithmic bias and data privacy issues to public safety concerns from autonomous machines running on AI.

Because AI technology and use cases are changing so rapidly, chief information officers and other executives are going to find it difficult to keep ahead of these ethical concerns without a roadmap. To guide both deep thinking and rapid decision-making about emerging AI technologies, organizations should consider developing an internal AI ethics framework."

Wednesday, March 6, 2019

Olympic champion shares personal experience on the importance of ethics; NTVabc, March 5, 2019

Lauren Kummer, NTVabc; Olympic champion shares personal experience on the importance of ethics

"On Tuesday, it was Ethics Day at the University of Nebraska at Kearney.
Naber spoke to students on character and ethics in a way that's relevant to everyday life.
"I think it's important to talk about what's in the best common good. Not in what's in your best interest but what is in our best interest," said Naber.

Naber shared stories of his own, one in particular that put him in a tough situation during the 1973 World Team Trials, where ethics came into question.

"I won the race but I didn't touch the wall correctly. The official thought I should be disqualified. The meet referee wasn't sure and they let me decide. Did I intend to fight the call? I remembered I didn't touch the wall. I said "I deserve to be disqualified" and I was. For that, I lost the chance to win a gold medal at the world championships but I earned my own self-respect. Of all the decisions I made in my swimming and athletic career I think that might be the highlight," said Naber."

Thursday, January 24, 2019

I Found $90 in the Subway. Is It Yours?; The New York Times, January 24, 2019

Niraj Chokshi, The New York Times; I Found $90 in the Subway. Is It Yours?

"As I got off a train in Manhattan on Wednesday, I paid little attention to a flutter out of the corner of my eye on the subway. Then another passenger told me that I had dropped some money.

“That isn’t mine,” I told her as I glanced at what turned out to be $90 on the ground.

I realized the flutter had been the money falling out of the coat of a man standing near me who had just stepped off the train.

The doors were about to close, and no one was acting, so I grabbed the cash and left the train. But I was too late. The man had disappeared into the crowd. I waited a few minutes to see if he would return, but he was long gone. I tried to find a transit employee or police officer, but none were in sight.

I was running late, so I left. But now what? What are you supposed to do with money that isn’t yours?"

Sunday, December 9, 2018

In China, Gene-Edited Babies Are the Latest in a String of Ethical Dilemmas; The New York Times, November 30, 2018

Sui-Lee Wee and Elsie Chen, The New York Times; In China, Gene-Edited Babies Are the Latest in a String of Ethical Dilemmas

"China has set its sights on becoming a leader in science, pouring millions of dollars into research projects and luring back top Western-educated Chinese talent. The country’s scientists are accustomed to attention-grabbing headlines by their colleagues as they race to dominate their fields.

But when He Jiankui announced on Monday that he had created the world’s first genetically edited babies, Chinese scientists — like those elsewhere — denounced it as a step too far. Now many are asking whether their country’s intense focus on scientific achievement has come at the expense of ethical standards."

Friday, November 30, 2018

In Yemen, Lavish Meals for Few, Starvation for Many and a Dilemma for Reporters; The New York Times, November 29, 2018

Declan Walsh, The New York Times; In Yemen, Lavish Meals for Few, Starvation for Many and a Dilemma for Reporters

"For a reporter, that brings a dilemma. Journalists travel with bundles of hard currency, usually dollars, to pay for hotels, transport and translation. A small fraction of that cash might go a long way for a starving family. Should I pause, put down my notebook and offer to help?

It’s a question some readers asked after we published a recent article on Yemen’s looming famine."

Thursday, May 17, 2018

MIT Now Has a Humanist Chaplain to Help Students With the Ethics of Tech; The Atlantic, May 16, 2018

Isabel Fattal, The Atlantic; MIT Now Has a Humanist Chaplain to Help Students With the Ethics of Tech

"Even some of the most powerful tech companies start out tiny, with a young innovator daydreaming about creating the next big thing. As today’s tech firms receive increased moral scrutiny, it raises a question about tomorrow’s: Is that young person thinking about the tremendous ethical responsibility they’d be taking on if their dream comes true?

Greg Epstein, the recently appointed humanist chaplain at MIT, sees his new role as key to helping such entrepreneurial students think through the ethical ramifications of their work. As many college students continue to move away from organized religion, some universities have appointed secular chaplains like Epstein to help non-religious students lead ethical, meaningful lives. At MIT, Epstein plans to spark conversations about the ethics of technology—conversations that will sometimes involve religious groups on campus, and that may sometimes carry over to Harvard, where he has held (and will continue to hold) the same position since 2005.

I recently spoke with Epstein about how young people can think ethically about going into the tech industry and what his role will look like..."

Thursday, March 22, 2018

A Huge Global Study On Driverless Car Ethics Found The Elderly Are Expendable; Forbes, March 21, 2018

Oliver Smith, Forbes; A Huge Global Study On Driverless Car Ethics Found The Elderly Are Expendable

"Over the last year, 4 million people took part by answering ethical questions in Moral Machine's many scenarios – which include different combinations of genders, ages, and even other species like cats and dogs, crossing the road.

On Sunday, the day before the first pedestrian fatality by an autonomous car in America, MIT's Professor Iyad Rahwan revealed the first results of the Moral Machine study at the Global Education and Skills Forum in Dubai."

Sunday, January 14, 2018

Bad behavior and rise in ethical dilemmas are an advantage to Denver’s Convercent, which just raised $25M; Denver Post, December 19, 2017

Tamara Chuang, Denver Post; Bad behavior and rise in ethical dilemmas are an advantage to Denver’s Convercent, which just raised $25M


"Ethics software developer Convercent said Tuesday it raised $25 million in new funding. The investment was led by Rho Ventures.

The Denver firm has seen interest in its software surge as tech companies and others battle ethical issues that went public, such as Uber’s problems with workplace harassment. Uber is reportedly a new client. Convercent’s software can pop up a reminder to employees when they’re facing a potential issue, such as rules that kick in when traveling overseas. But closer to home, companies are reaching out to Convercent in the wake of celebrity sexual harassment scandals."
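
The rule-triggered reminders described above are easy to picture in code. Below is a minimal, hypothetical sketch of that pattern; the rule set, field names, and messages are invented for illustration and are not Convercent's actual product logic.

    # Hypothetical sketch of rule-triggered compliance reminders.
    # All rules, field names, and messages are invented for illustration.
    RULES = [
        {"when": {"travel": "international"},
         "remind": "Review the anti-bribery policy before meeting vendors abroad."},
        {"when": {"event": "gift_received"},
         "remind": "Gifts over a set value must be reported to Compliance."},
    ]

    def reminders_for(context):
        """Return every reminder whose conditions all match the employee's context."""
        return [rule["remind"] for rule in RULES
                if all(context.get(key) == value
                       for key, value in rule["when"].items())]

    # An employee logging an international trip would see the travel reminder pop up:
    print(reminders_for({"travel": "international", "office": "Denver"}))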

Tuesday, July 11, 2017

Stanford's Final Exams Pose Question About the Ethics of Genetic Engineering; Futurism, July 9, 2017

Tom Ward, Futurism; Stanford's Final Exams Pose Question About the Ethics of Genetic Engineering

"When bioengineering students sit down to take their final exams for Stanford University, they are faced with a moral dilemma, as well as a series of grueling technical questions that are designed to sort the intellectual wheat from the less competent chaff:
If you and your future partner are planning to have kids, would you start saving money for college tuition, or for printing the genome of your offspring?
The question is a follow up to “At what point will the cost of printing DNA to create a human equal the cost of teaching a student in Stanford?” Both questions refer to the very real possibility that it may soon be in the realm of affordability to print off whatever stretch of DNA you so desire, using genetic sequencing and a machine capable of synthesizing the four building blocks of DNA — A, C, G, and T — into whatever order you desire...
It is vital to discuss the ethics of gene editing in order to ensure that the technology is not abused in the future. Stanford’s question is praiseworthy because it makes today’s students, who will most likely be spearheading the technology’s developments, think about the consequences of their work."
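
The follow-up question is, at bottom, arithmetic. Here is a back-of-the-envelope sketch: the genome size is the standard estimate, but every dollar figure is a hypothetical placeholder, not a real synthesis or tuition price.

    # Back-of-the-envelope comparison: printing a genome vs. paying tuition.
    # Genome size is the standard ~3.2 billion base-pair estimate; the dollar
    # figures are hypothetical placeholders.
    GENOME_BASES = 3.2e9
    TUITION_TOTAL = 300_000  # hypothetical all-in cost of a degree, in USD

    def cost_to_print(usd_per_base):
        return GENOME_BASES * usd_per_base

    print(cost_to_print(0.10))            # $320,000,000 at a hypothetical $0.10 per base
    print(TUITION_TOTAL / GENOME_BASES)   # break-even price: ~$0.00009 per base

On these placeholder numbers, the two costs cross only when synthesizing a base pair drops below roughly a hundredth of a cent, which is why the exam frames the comparison as "soon" rather than "now".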

Monday, July 10, 2017

‘Spider-Man: Homecoming’ is a superheroic meditation on how to be a good person; Washington Post, July 10, 2017

Alyssa Rosenberg, Washington Post; ‘Spider-Man: Homecoming’ is a superheroic meditation on how to be a good person

[Kip Currier: Spoiler-Free Comment: I saw Spider-Man: Homecoming this weekend and it's great--for all of the reasons (and more) that Alyssa Rosenberg identifies in her column today. The film doesn't actually quote the oft-quoted Spider-Man touchstone "With Great Power Comes Great Responsibility", but you feel its invisible presence throughout the film's narrative arc.]

"This column discusses the plot, and ethical dilemmas, of “Spider-Man: Homecoming.”


“Spider-Man: Homecoming,” which zipped into theaters last weekend, is almost everything a summer blockbuster should be: It’s very funny without using humor as an excuse to be less than emotionally accessible; its super-sized throw-downs are anchored in real, human-scale conflicts; its world is richly populated with characters who aren’t solely defined by their powers or lack thereof; and it resists the urge to revisit the most famous story beats associated with its title character’s origin story. All of these elements made “Spider-Man” only the second blockbuster this year I’m eager to rewatch as soon as possible. And another element has left me thinking of it with more than mere amusement: “Spider-Man: Homecoming” is at its most poignant when it’s concerned with how to be a good person — often, specifically, a good man."

Wednesday, May 24, 2017

Stanford scholars, researchers discuss key ethical questions self-driving cars present; Stanford News, May 22, 2017

Alex Shashkevich, Stanford News; Stanford scholars, researchers discuss key ethical questions self-driving cars present


"Trolley problem debated
A common argument on behalf of autonomous cars is that they will decrease traffic accidents and thereby increase human welfare. Even if true, deep questions remain about how car companies or public policy will engineer for safety.
“Everyone is saying how driverless cars will take the problematic human out of the equation,” said Taylor, a professor of philosophy. “But we think of humans as moral decision-makers. Can artificial intelligence actually replace our capacities as moral agents?”
That question leads to the “trolley problem,” a popular thought experiment ethicists have mulled over for about 50 years, which can be applied to driverless cars and morality.
In the experiment, one imagines a runaway trolley speeding down a track which has five people tied to it. You can pull a lever to switch the trolley to another track, which has only one person tied to it. Would you sacrifice the one person to save the other five, or would you do nothing and let the trolley kill the five people?
Engineers of autonomous cars will now have to tackle this question and other, more complicated scenarios, said Taylor and Rob Reich, the director of Stanford’s McCoy Family Center for Ethics in Society."
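
To make the engineering problem concrete, here is a minimal sketch of a trolley-style chooser framed as expected-harm minimization. It is purely illustrative: the maneuver names, probabilities, and the utilitarian weighting are assumptions for the example, not anything drawn from a real vehicle's planner.

    # Toy trolley-style decision rule: pick the maneuver with the lowest
    # expected harm. All names and numbers here are hypothetical.
    def choose_maneuver(options):
        """options maps maneuver name -> (probability_of_harm, people_at_risk);
        returns the maneuver whose expected harm (probability * people) is lowest."""
        return min(options, key=lambda m: options[m][0] * options[m][1])

    scenario = {
        "stay_on_course": (0.9, 5),  # hypothetical: 90% chance of harming 5 people
        "swerve":         (0.9, 1),  # hypothetical: 90% chance of harming 1 person
    }
    print(choose_maneuver(scenario))  # -> "swerve" under this purely utilitarian rule

Even this toy rule silently adopts a contested ethical stance (pure harm minimization); weighting passengers differently from pedestrians, or the young differently from the old, changes the answer, which is exactly the question Taylor and Reich say engineers can no longer avoid.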

Saturday, December 31, 2016

What You Can Do to Improve Ethics at Your Company; Harvard Business Review (HBR), 12/29/16

Christopher McLaverty and Annie McKee, Harvard Business Review (HBR); What You Can Do to Improve Ethics at Your Company:

    "Enron. Wells Fargo. Volkswagen. It’s hard for good, ethical people to imagine how these meltdowns could possibly happen. We assume it’s only the Ken Lays and Bernie Madoffs of the world who will cheat people. But what about the ordinary engineers, managers, and employees who designed cars to cheat automotive pollution controls or set up bank accounts without customers’ permission? We tell ourselves that we would never do those things. And, in truth, most of us won’t cook the books, steal from customers, or take that bribe.

    But, according to a study by one of us (Christopher) of C-suite executives from India, Colombia, Saudi Arabia, the U.S., and the U.K., many of us face an endless stream of ethical dilemmas at work. In-depth interviews with these leaders provide some insight and solutions that can help us when we do face these quandaries."

Sunday, December 18, 2016

The Wild West of Robotic "Rights and Wrongs"; Ethics and Information Blog, 12/18/16

Kip Currier, Ethics and Information Blog; The Wild West of Robotic "Rights and Wrongs"

The challenge of "robot ethics"--how to imbue robotic machines and artificial intelligence (AI) with the "right" programming and protocols to make ethical decisions--is a hot topic in academe and business, particularly right now in relation to its application in autonomous self-driving vehicles (e.g. Uber, Apple, Google).

When we think about ethical questions addressing how robots should or should not act, Isaac Asimov's oft-discussed "Three Laws of Robotics", spelled out in his 1942 short story "Runaround", certainly come to mind (see here).

Themes of robots making judgments of "right and wrong", as well as ethical topics exploring AI accountability and whether "human rights" should be inclusive of "rights-for-robots", have also been prominent in depictions of robots and AI in numerous science fiction films and TV shows over the past 50+ years: Gort in The Day The Earth Stood Still (1951) and (2008) (Klaatu...Barada...Nikto!). 2001: A Space Odyssey (1968) and the monotonal, merciless HAL 9000 ("Open the pod bay doors, Hal"). 1983's WarGames, starring Brat Pack-ers Matthew Broderick and Ally Sheedy, can also be seen as a cautionary tale of ethical-decision-making-gone-awry in a proto-machine learning gaming program ("Shall we play a game?"), used for then-Cold War military and national security purposes.

Blade Runner (1982) revealed Replicants-with-an-expiration-date-on-the-run. (We'll have to wait and see what's up with the Replicants until sequel Blade Runner 2049 debuts in late 2017.) Arnold Schwarzenegger played a killer robot from the future in The Terminator (1984), and returned as a reprogrammed/converted "robot savior" in Terminator 2: Judgment Day (1991). Star Trek: The Next Generation (1987-1994) explored throughout its run "sentience" and the nature of humans AND non-humans "being human", as seen through the eyes of Enterprise android crew member "Commander Data" (see the standout 1989 episode "The Measure of a Man"). Fifth-column, sometimes-sleeper Cylons with "many copies" and "a plan" were the driving force in 2004-2009's Battlestar Galactica. Will Smith portrayed a seriously robophobic cop hot on the heels of a homicidal robot suspect in the Asimov-short-story-collection-suggested I, Robot (2004).

Most recently, robots are front and center (if not always readily identifiable!) in this year's breakout HBO hit Westworld (see the official Opening Credits here). Short-hand for the show's plot: "robots in an American West-set amusement park for the human rich". But it's a lot more than that. Westworld is an inspired reimagining ("Game of Thrones" author George R.R. Martin recently called this first season of “Westworld” a "true masterpiece") of the same-named, fairly-forgettable (but for Yul Brynner's memorable robot role, solely credited as "Gunslinger"!) 1973 Michael Crichton-written/directed film. What the 1973 version lacked in deep-dive thoughts, the new version makes up for in spades, and then some: This is a show about robots (but really, the nature of consciousness and agency) for thinking people--with, ahem, unapologetic dashes of Game of Thrones-esque sex and violence ("It's Not TV. It's HBO.(R)") sprinkled liberally throughout.

Much of the issue of robot ethics has tended to center on the impacts of robots on humans. With "impacts" often meaning, at a minimum, job obsolescence for humans (see here and here). Or, at worst (especially in terms of pop culture narratives), euphemistic code for "death and destruction to humans". (Carnegie Mellon University PhD and author Daniel H. Wilson's 2011 New York Times best-selling Robopocalypse chillingly tapped into fears of a "Digital Axis of Evil"--AI/robots/Internet-of-Things--Revolution of robotic rampage and revenge against humans, perceived as both oppressors and inferior. This year Stephen Hawking and Elon Musk, among others (from 2015, see here and here), also voiced real-world concerns about the threats AI may hold for future humanity.)

But thought-provoking, at times unsettling and humanizing depictions of robotic lifeforms--Westworld "hosts" Maeve and Dolores et al., robot boy David in Steven Spielberg's A.I. Artificial Intelligence (2001), as well as animated treatments in 2008's WALL-E from Pixar and 2016's Hum (see post below linked here)--are leveling this imbalance. Flipping the "humancentric privilege" and spurring us to think about the impacts of human beings on robots. What ethical considerations, if any, are owed to the latter? Whether robots/AI can and should be (will be?) seen as emergent "forms of life". Perhaps even with "certain inalienable Rights" (Robot Lives Matter?).

(Aside: As a kid who grew up watching the "Lost in Space" TV show (1965-1968) in syndication in the 1970s, I'll always have a soft spot for the Robinson family's trusty robot ("Danger, Will Robinson, Danger!"), simply called...wait for it..."Robot".)

In the meantime--at least until sentient robots can think about "the nature of their own existence" a la Westworld, or the advent of the "singularity" (sometimes described as the merging of man and machine and/or the moment when machine intelligence surpasses that of humans)--these fictionalized creations serve as allegorical constructs to ponder important, enduring questions: What it means to be "human". The nature of "right" and "wrong", and the shades in between. Interpretations of societal values, like "compassion", "decency", and "truth". And what it means to live in a "civilized" society. Sound timely?

Tuesday, November 22, 2016

Teaching an Algorithm to Understand Right and Wrong; Harvard Business Review, 11/15/16

Greg Satell, Harvard Business Review; Teaching an Algorithm to Understand Right and Wrong:

[Kip Currier: This thought-provoking Harvard Business Review article by Greg Satell marks the 1,000th post to this blog in 2016--more than I've posted in the entire previous 6 years since starting the Ethics blog in 2010. Anecdotally, and unsurprisingly, much of this year's posted corpus was plucked from the avalanche of ethics-related content generated via the most tumultuous Presidential Election in U.S. history. Ethics issues in Cyberhacking, Email and Social Media usage, Cyberbullying, Sexual Harassment, Diversity and Inclusion, Surveillance, Privacy, Censorship, "Truth" (Aside #1: Oxford Dictionaries recently declared "post-truth" the Word of the Year!), Fact-checking, Media responsibility, Self-driving cars (Aside #2: I passed an Uber test car on the way to the University of Pittsburgh campus this morning, giving a nod and a thumbs up to the two people sitting in the front seats of their metallic grey-hued autonomous vehicle wending the curves of Bigelow Boulevard's bluffs), and AI ethics provided a glut (and at times, like this summer, what felt like an unrelenting information-tsunami) of postable fodder. Last week's post-2016 election White-Hot-Topic, "Fake News"--with first-hand accounts of cringe-worthy click-incentivized content crafted by transparently unrepentant fake news scribes--remains a still-smoldering one for the blogosphere this week, vying with Conflicts of Interest for the #1 spot at the top. Or bottom, as it were. No risk in predicting that all of these thorny topics will continue to be dissected and debated in 2017. And that ethics issues--both in general and those with an information twist--are more relevant and wide-ranging than ever in our wired world.]

"In his Nicomachean Ethics, Aristotle states that it is a fact that “all knowledge and every pursuit aims at some good,” but then continues, “What then do we mean by the good?” That, in essence, encapsulates the ethical dilemma. We all agree that we should be good and just, but it’s much harder to decide what that entails.

Since Aristotle’s time, the questions he raised have been continually discussed and debated. From the works of great philosophers like Kant, Bentham, and Rawls to modern-day cocktail parties and late-night dorm room bull sessions, the issues are endlessly mulled over and argued about but never come to a satisfying conclusion.

Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to denote the principles by which a person should act justly and wisely, then how are we to encode them within the artificial intelligences we are creating? It is a question that we need to come up with answers for soon."

Sunday, November 20, 2016

Whose life should your car save?; Pittsburgh Post-Gazette, 11/20/16

Azim Shariff, Iyad Rahwan and Jean-Francois Bonnefon, Pittsburgh Post-Gazette; Whose life should your car save?:

"The widespread use of self-driving cars promises to bring substantial benefits to transportation efficiency, public safety and personal well-being. Car manufacturers are working to overcome the remaining technical challenges that stand in the way of this future. Our research, however, shows that there is also an important ethical dilemma that must be solved before people will be comfortable trusting their lives to these cars.

As the National Highway Traffic Safety Administration has noted, autonomous cars may find themselves in circumstances in which the car must choose between risks to its passengers and risks to a potentially greater number of pedestrians. Imagine a situation in which the car must either run off the road or plow through a large crowd of people: Whose risk should the car’s algorithm aim to minimize?

This dilemma was explored in studies that we recently published in the journal Science...

This is why, despite its mixed messages, Mercedes-Benz should be applauded for speaking out on the subject. The company acknowledges that to “clarify these issues of law and ethics in the long term will require broad international discourse.”"
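
The tradeoff in the excerpt can be reduced to a single tunable parameter. As a hedged sketch (the weight and the counts are invented for illustration; the authors' studies in Science are survey-based, not this formula), one "how much does the car privilege its own passengers" knob is enough to flip the decision:

    # Illustrative passenger-vs-pedestrian tradeoff with one hypothetical knob:
    # passenger_weight = 1.0 is strictly utilitarian; larger values are
    # increasingly self-protective. All numbers are invented for the example.
    def should_swerve_off_road(passengers, pedestrians, passenger_weight=1.0):
        """True if risking the passengers is judged cheaper than hitting the crowd."""
        return passenger_weight * passengers < pedestrians

    print(should_swerve_off_road(1, 10, passenger_weight=1.0))   # True: sacrifices its passenger
    print(should_swerve_off_road(1, 10, passenger_weight=20.0))  # False: protects its passenger

Where to set that knob, and whether buyers, manufacturers or regulators get to set it, is the dilemma the authors argue must be resolved before people will trust their lives to these cars.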

Tuesday, October 11, 2016

Glenn Beck Says Opposing Trump Is ‘Moral, Ethical’ Even if It Means Clinton Wins; New York Times, 10/11/16

Liam Stack, New York Times; Glenn Beck Says Opposing Trump Is ‘Moral, Ethical’ Even if It Means Clinton Wins:

"Glenn Beck, the fiery conservative media personality and former Fox News host, says that he briefly considered voting for Hillary Clinton and called opposing Donald J. Trump the “moral, ethical choice” — even if doing so leads to Mrs. Clinton winning the presidential election.

His comments were made after the release on Friday of a 2005 recording of Mr. Trump boasting about sexual assault that set off a war between the presidential nominee and a broad swath of the Republican establishment, including Paul D. Ryan, the House speaker.

Reacting to the recording, Mr. Beck wrote over the weekend that each person “must decide what is a bridge too far” and said he supported calls for Mr. Trump to withdraw from the presidential race.

“It is not acceptable to ask a moral, dignified man to cast his vote to help elect an immoral man who is absent decency or dignity,” Mr. Beck wrote on Facebook. “If the consequence of standing against Trump and for principles is indeed the election of Hillary Clinton, so be it. At least it is a moral, ethical choice.”"

Monday, September 5, 2016

Top Safety Official Doesn’t Trust Automakers to Teach Ethics to Self-Driving Cars; MIT Technology Review, 9/2/16

Andrew Rosenblum, MIT Technology Review; Top Safety Official Doesn’t Trust Automakers to Teach Ethics to Self-Driving Cars:

"Rapid progress on autonomous driving has led to concerns that future vehicles will have to make ethical choices, for example whether to swerve to avoid a crash if it would cause serious harm to people outside the vehicle.

Christopher Hart, chairman of the National Transportation Safety Board, is one of them. He told MIT Technology Review that federal regulations will be required to set the basic morals of autonomous vehicles, as well as safety standards for how reliable they must be...

Hart also said there would need to be rules for how ethical prerogatives are encoded into software. He gave the example of a self-driving car faced with a decision between a potentially fatal collision with an out-of-control truck or heading up on the sidewalk and hitting pedestrians. “That to me is going to take a federal government response to address,” said Hart. “Those kinds of ethical choices will be inevitable.”

The NHTSA has been evaluating how it will regulate driverless cars for the past eight months, and will release guidance in the near future. The agency hasn't so far discussed ethical concerns about automated driving.

What regulation exists for self-driving cars comes from states such as California, and is targeted at the prototype vehicles being tested by companies such as Alphabet and Uber."

Friday, August 12, 2016

Should a Self-Driving Car Kill Two Jaywalkers or One Law-Abiding Citizen?; Slate, 8/11/16

Jacob Brogan, Slate; Should a Self-Driving Car Kill Two Jaywalkers or One Law-Abiding Citizen?:

"Anyone who’s followed the debates surrounding autonomous vehicles knows that moral quandaries inevitably arise. As Jesse Kirkpatrick has written in Slate, those questions most often come down to how the vehicles should perform when they’re about to crash. What do they do if they have to choose between killing a passenger and harming a pedestrian? How should they behave if they have to decide between slamming into a child or running over an elderly man?

It’s hard to figure out how a car should make such decisions in part because it’s difficult to get humans to agree on how we should make them. By way of evidence, look to Moral Machine, a website created by a group of researchers at the MIT Media Lab. As the Verge’s Russell Brandom notes, the site effectively gamifies the classic trolley problem, folding in a variety of complicated variations along the way."
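
Moral Machine presents visitors with pairs of unavoidable-crash outcomes and records which one they judge less bad. As a rough, hypothetical sketch of how such pairwise choices could be aggregated (a naive win/loss tally, not the MIT researchers' actual methodology):

    from collections import Counter

    # Hypothetical Moral Machine-style votes: each tuple is (spared, sacrificed).
    votes = [
        ("child", "elderly"),
        ("child", "elderly"),
        ("law_abiding_citizen", "jaywalker"),
        ("elderly", "dog"),
        ("child", "dog"),
    ]

    spared = Counter(winner for winner, _ in votes)
    sacrificed = Counter(loser for _, loser in votes)

    # Naive preference score: times spared minus times sacrificed.
    groups = set(spared) | set(sacrificed)
    scores = {g: spared[g] - sacrificed[g] for g in groups}
    for group, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(group, score)

Rankings built from judgments like these are what let researchers quantify the disagreements the article describes, such as whether two jaywalkers outweigh one law-abiding pedestrian.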