Friday, April 22, 2016

How to Be Good: Why you can’t teach human values to artificial intelligence; Slate, 4/20/16

Adam Elkus, Slate; How to Be Good: Why you can’t teach human values to artificial intelligence:
"As Collins pointed out, computers acquire human knowledge and abilities from the fact that they are embedded in human social contexts. A Japanese elder care personal robot, for example, is only able to act in a way acceptable to Japanese senior citizens because its programmers understand Japanese society. So talk of machines and human knowledge, values, and goals is frustratingly circular.
Which brings us back to Russell’s optimistic assumptions that computer scientists can sidestep these social questions through superior algorithms and engineering efforts. Russell is an engineer, not a humanities scholar. When he talks about “tradeoffs” and “value functions,” he assumes that a machine ought to be an artificial utilitarian. Russell also suggests that machines ought to learn a cross-section of human values from human cultural and media products. So does that mean a machine could learn about American race relations by watching the canonical pro-Ku Klux Klan and pro-Confederacy film The Birth of a Nation?
But Russell’s biggest problem lies in the very much “values”-based question of whose values ought to determine the values of the machine. One does not imagine too much overlap between hard-right Donald Trump supporters and hard-left Bernie Sanders supporters on some key social and political questions, for example. And the other (artificial) elephant in the room is the question of what gives Western, well-off, white male cisgender scientists such as Russell the right to determine how the machine encodes and develops human values, and whether or not everyone ought to have a say in determining the way that Russell’s hypothetical A.I. makes tradeoffs...
The harder problem is the thorny question of which humans ought to have the social, political, and economic power to make A.I. obey their values, and no amount of data-driven algorithms is going to solve it."
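Elkus's point about "value functions" can be made concrete. Below is a minimal Python sketch of the weighted-utilitarian aggregation his reading of Russell implies; every name in it (the stakeholder groups, the weights, the utilities) is invented for illustration and is not drawn from the article or from Russell's work. The arithmetic is trivial; the choice of weights, that is, whose values count and by how much, is exactly the contested question the piece identifies.

# Toy weighted-utilitarian "value function." All names here are
# hypothetical, for illustration only.

def value(action_utilities, weights):
    """Score an action as a weighted sum of per-group utilities.

    The algorithm is trivial; choosing `weights` -- whose values
    count, and how much -- is a political question, not a
    computational one.
    """
    return sum(weights[group] * u for group, u in action_utilities.items())

# The same utilities, ranked under two different weightings:
utilities = {
    "policy_a": {"group_1": 1.0, "group_2": -0.5},
    "policy_b": {"group_1": -0.2, "group_2": 0.8},
}
for weights in ({"group_1": 1.0, "group_2": 1.0},   # equal say
                {"group_1": 1.8, "group_2": 0.2}):  # group_1 dominates
    best = max(utilities, key=lambda a: value(utilities[a], weights))
    print(weights, "->", best)

# Equal weights pick policy_b; weights favoring group_1 pick policy_a.
# No "superior algorithm" settles which weighting is the right one.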