The unnatural ethics of AI could be its undoing; The Outline
"When I used to teach philosophy at universities, I always resented
having to cover the Trolley Problem, which struck me as everything the
subject should not be: presenting an extreme situation, wildly detached
from most dilemmas the students would normally face, in which our agency
is unrealistically restricted, and using it as some sort of ideal model
for ethical reasoning (the first model of ethical reasoning that many
students will come across, no less). Ethics should be about things like
the power structures we enter into at work, what relationships we decide
to pursue, who we are or want to become — not this fringe-case
intuition-pump nonsense.
But maybe I’m wrong. Because, if we
believe tech gurus at least, the Trolley Problem is about to become of
huge real-world importance. Human beings might not find themselves in
all that many Trolley Problem-style scenarios over the course of their
lives, but soon we're going to start seeing self-driving cars on our
streets, and they're going to have to make these judgments all the time.
Self-driving cars are potentially going to find themselves in all sorts
of accident scenarios where the AI controlling them has to decide which
human lives it ought to preserve. But in practice what this means is
that human beings will have to grapple with the Trolley Problem — since they're going to be responsible for programming the AIs...
I'm much more sympathetic to the “AI is bad” line. We have little reason
to trust that big tech companies (i.e. the people responsible for
developing this technology) are doing it to help us, given how wildly
their interests diverge from our own."
Issues and developments related to ethics, information, and technologies, examined in the ethics and intellectual property graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology" will be published in Summer 2025. Kip Currier, PhD, JD
Showing posts with label moral reasoning. Show all posts
Tuesday, January 29, 2019
Saturday, September 15, 2018
Ronald J. Daniels; Please, students, take that ‘impractical’ humanities course. We will all benefit.; The Washington Post, September 14, 2018
"Ronald J. Daniels is the president of Johns Hopkins University. This op-ed is adapted from a letter to Hopkins students...
I would have also mentioned to the student who shunned the philosophy course that he was misinformed about the job market. It is true that many employers are looking for graduates with specialized technical skills, but they also look for other capabilities. As the world is transformed by artificial intelligence, machine learning and automation, the uniquely human qualities of creativity, imagination, discernment and moral reasoning will be the ultimate coin of the realm. All these skills, as well as the ability to communicate clearly and persuasively, are honed in humanities courses."