The unnatural ethics of AI could be its undoing, The Outline:
"When I used to teach philosophy at universities, I always resented
having to cover the Trolley Problem, which struck me as everything the
subject should not be: presenting an extreme situation, wildly detached
from most dilemmas the students would normally face, in which our agency
is unrealistically restricted, and using it as some sort of ideal model
for ethical reasoning (the first model of ethical reasoning that many
students will come across, no less). Ethics should be about things like
the power structures we enter into at work, what relationships we decide
to pursue, who we are or want to become — not this fringe-case
intuition-pump nonsense.
But maybe I’m wrong. Because, if we
believe the tech gurus at least, the Trolley Problem is about to become of
huge real-world importance. Human beings might not find themselves in
all that many Trolley Problem-style scenarios over the course of their
lives, but soon we're going to start seeing self-driving cars on our
streets, and they're going to have to make these judgments all the time.
Self-driving cars are potentially going to find themselves in all sorts
of accident scenarios where the AI controlling them has to decide which
human lives it ought to preserve. But in practice what this means is
that human beings will have to grapple with the Trolley Problem — since they're going to be responsible for programming the AIs...
I'm much more sympathetic to the “AI is bad” line. We have little reason
to trust that big tech companies (i.e. the people responsible for
developing this technology) are doing it to help us, given how wildly
their interests diverge from our own."
Until that day arrives, Barbara Grosz, the Higgins Professor of Natural
Sciences at the Harvard John A. Paulson School of Engineering and
Applied Sciences (SEAS), is working to instill in the next generation of
computer scientists a mindset that considers the societal impact of
their work, and the ethical reasoning and communication skills to do
so.