Showing posts with label Brad Smith. Show all posts

Thursday, July 18, 2024

The Future of Ethics in AI: A Global Conversation in Hiroshima; JewishLink, July 18, 2024

Rabbi Dr. Ari Berman, JewishLink

The Future of Ethics in AI: A Global Conversation in Hiroshima

"Last week, I had the honor of representing the Jewish people at the AI Ethics for Peace Conference in Hiroshima, Japan, a three day conversation of global faith, political and industry leaders. The conference was held to promote the necessity of ethical guidelines for the future of artificial intelligence. It was quite an experience.

During the conference, I found myself sitting down for lunch with a Japanese Shinto Priest, a Zen Buddhist monk and a leader of the Muslim community from Singapore. Our conversation could not have been more interesting. The developers who devised AI can rightfully boast of many accomplishments, and they can now count among them the unintended effect of bringing together people of diverse backgrounds who are deeply concerned about the future their creators will bring.

AI promises great potential benefits, including global access to education and healthcare, medical breakthroughs, and greater predictability that will lead to efficiencies and a better quality of life, at a level unimaginable just a few years ago. But it also poses threats to the future of humanity, including deepfakes, structural biases in algorithms, a breakdown of human connectivity, and the deterioration of personal privacy."

Thursday, January 18, 2018

In new book, Microsoft cautions humanity to develop AI ethics guidelines now; GeekWire, January 17, 2018

Monica Nickelsburg, GeekWire

In new book, Microsoft cautions humanity to develop AI ethics guidelines now

"This dangerous scenario is one of many posited in “The Future Computed,” a new book published by Microsoft, with a foreword by Brad Smith, Microsoft president and chief legal officer, and Harry Shum, executive vice president of Microsoft’s Artificial Intelligence and Research group.

The book examines the use cases and potential dangers of AI technology, which will soon be integrated into many of the systems people use every day. Microsoft believes AI should be developed with six core principles: “fair, reliable and safe, private and secure, inclusive, transparent, and accountable.”

Nimble policymaking and strong ethical guidelines are essential to ensuring AI doesn’t threaten equity or security, Microsoft says. In other words, we need to start planning now to avoid a scenario like the one facing the imaginary tech company looking for software engineers."