Showing posts with label AI researchers.

Thursday, April 30, 2020

AI researchers propose ‘bias bounties’ to put ethics principles into practice; VentureBeat, April 17, 2020

Khari Johnson, VentureBeat; AI researchers propose ‘bias bounties’ to put ethics principles into practice

"Researchers from Google Brain, Intel, OpenAI, and top research labs in the U.S. and Europe joined forces this week to release what the group calls a toolbox for turning AI ethics principles into practice. The kit for organizations creating AI models includes the idea of paying developers for finding bias in AI, akin to the bug bounties offered in security software.

This recommendation and other ideas for ensuring AI is made with public trust and societal well-being in mind were detailed in a preprint paper published this week. The bug bounty hunting community might be too small to create strong assurances, but developers could still unearth more bias than is revealed by measures in place today, the authors say."

Tuesday, December 11, 2018

When algorithms go wrong we need more power to fight back, say AI researchers; The Verge, December 8, 2018

James Vincent, The Verge; When algorithms go wrong we need more power to fight back, say AI researchers

"Governments and private companies are deploying AI systems at a rapid pace, but the public lacks the tools to hold these systems accountable when they fail. That’s one of the major conclusions in a new report issued by AI Now, a research group home to employees from tech companies like Microsoft and Google and affiliated with New York University.

The report examines the social challenges of AI and algorithmic systems, homing in on what researchers call “the accountability gap” as this technology is integrated “across core social domains.” They put forward ten recommendations, including calling for government regulation of facial recognition (something Microsoft president Brad Smith also advocated for this week) and “truth-in-advertising” laws for AI products, so that companies can’t simply trade on the reputation of the technology to sell their services."