This blog spotlights issues and topics explored in my LIS 2194: Information Ethics graduate course—Technology Ethics, Privacy, Surveillance, Data Harvesting, IoT, Intellectual Property, AI Algorithms, Independent Press, Free Speech, Censorship, Cyberhacking, Weaponized Information, National Security, Cyberbullying—as well as Ethics topics of a more general nature, such as Integrity, Equality, Truth, Justice, Accountability, Civil Discourse, Transparency, Conflicts of Interest, and Inclusion.
A common argument in favor of autonomous cars is that they will reduce traffic accidents and thereby improve human welfare. Even if that proves true, deep questions remain about how car companies and public policymakers will engineer for safety.
“Everyone is saying how driverless cars will take the problematic human out of the equation,” said Taylor, a professor of philosophy. “But we think of humans as moral decision-makers. Can artificial intelligence actually replace our capacities as moral agents?”
That question leads to the “trolley problem,” a popular thought experiment that ethicists have mulled over for about 50 years and that applies readily to the morality of driverless cars.
In the experiment, one imagines a runaway trolley speeding toward five people tied to the track. You can pull a lever to divert the trolley onto another track, to which only one person is tied. Would you sacrifice the one person to save the five, or would you do nothing and let the trolley kill the five?
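Purely for illustration, the strict utilitarian reading of the thought experiment can be reduced to a few lines of code. The function name and structure below are my own invention, not anything a real autonomous vehicle runs; the sketch only makes explicit how crude a "count the bodies" policy really is:

```python
# Toy sketch of a naive utilitarian rule for the trolley problem.
# This is NOT how actual autonomous-vehicle software decides anything;
# it only spells out the arithmetic the thought experiment implies.

def pull_lever(people_on_main_track: int, people_on_side_track: int) -> bool:
    """Return True if diverting the trolley would harm fewer people."""
    return people_on_side_track < people_on_main_track

# Classic setup: five people on the main track, one on the side track.
decision = pull_lever(5, 1)
print(decision)  # a strict utilitarian diverts the trolley
```

That the whole "moral reasoning" fits in one comparison is exactly the worry: everything the philosophers argue about—intent, responsibility, the difference between killing and letting die—vanishes from the code.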