Calum Chace, Forbes: Does AI Ethics Have A Bad Name?
"One possible downside is that people
outside the field may get the impression that some sort of moral agency
is being attributed to the AI, rather than to the humans who develop AI
systems. The AI we have today is narrow AI: superhuman in certain
narrow domains, like playing chess and Go, but useless at anything else.
It makes no more sense to attribute moral agency to these systems than
it does to a car or a rock. It will probably be many years before we
create an AI which can reasonably be described as a moral agent...
The issues explored in the field of AI
ethics are important but it would help to clarify them if some of the
heat was taken out of the discussion. It might help if instead of
talking about AI ethics, we talked about beneficial AI and AI safety.
When an engineer designs a bridge she does not finish the design and
then consider how to stop it from falling down. The ability to remain
standing in all foreseeable circumstances is part of the design
criteria, not a separate discipline called “bridge ethics”. Likewise, if
an AI system has deleterious effects it is simply a badly designed AI
system.
Interestingly, this change has already
happened in the field of AGI research, the study of whether and how to
create artificial general intelligence, and how to avoid the potential
downsides of that development, if and when it does happen. Here,
researchers talk about AI safety. Why not make the same move in the
field of shorter-term AI challenges?"