It’s not easy to encourage the ethical use of artificial intelligence. But here are 10 recommendations.
"The Recommendations
Transparency: Companies should be transparent about the design, intention and use of their A.I. technology.
Disclosure: Companies should clearly disclose to users what data is being collected and how it is being used.
Privacy: Users should be able to easily opt out of data collection.
Diversity: A.I. technology should be developed by inherently diverse teams.
Bias: Companies should strive to avoid bias in A.I. by drawing on diverse data sets.
Trust: Organizations should have internal processes to self-regulate the misuse of A.I., such as a chief ethics officer or an ethics board.
Accountability: There should be a common set of standards by which companies are held accountable for the use and impact of their A.I. technology.
Collective governance: Companies should work together to self-regulate the industry.
Regulation: Companies should work with regulators to develop appropriate laws to govern the use of A.I.
“Complementarity”: Treat A.I. as a tool for humans to use, not a replacement for human work.
The leaders of the groups: Frida Polli, a founder and chief executive, Pymetrics; Sara Menker, founder and chief executive, Gro Intelligence; Serkan Piantino, founder and chief executive, Spell; Paul Scharre, director, Technology and National Security Program, The Center for a New American Security; Renata Quintini, partner, Lux Capital; Ken Goldberg, William S. Floyd Jr. distinguished chair in engineering, University of California, Berkeley; Danika Laszuk, general manager, Betaworks Camp; Elizabeth Joh, Martin Luther King Jr. Professor of Law, University of California, Davis; Candice Morgan, head of inclusion and diversity, Pinterest.