This year, for the first time, major AI conferences, the gatekeepers for publishing research, are forcing computer scientists to think about those consequences.

The Annual Conference on Neural Information Processing Systems will require a “broader impact statement” addressing the effect a piece of research might have on society. The Conference on Empirical Methods in Natural Language Processing will begin rejecting papers on ethical grounds. Other conferences have emphasized their voluntary ethics guidelines.

The new standards follow the publication of several ethically dubious papers. Microsoft collaborated with researchers at Beihang University to algorithmically generate fake comments on news stories. Harrisburg University researchers developed a tool that claimed to predict the likelihood that someone would commit a crime based on their face. Researchers clashed on Twitter over the wisdom of publishing these and other papers.

“The research community is beginning to acknowledge that we have some level of responsibility for how these systems are used,” says Inioluwa Raji, a tech fellow at NYU’s AI Now Institute. Scientists have an obligation to think about applications and consider restricting research, she says, especially in fields like facial recognition with a high potential for misuse.