Showing posts with label Trustworthy AI. Show all posts

Thursday, July 13, 2023

‘You can do both’: experts seek ‘good AI’ while attempting to avoid the bad; The Guardian, July 7, 2023

The Guardian; ‘You can do both’: experts seek ‘good AI’ while attempting to avoid the bad

"“We know how to make AI that people want, but we don’t know how to make AI that people can trust,” said Marcus.

The question of how to imbue AI with human values is sometimes referred to as “the alignment problem”, although it is not a neatly defined computational puzzle that can be resolved and implemented in law. This means that the question of how to regulate AI is a massive, open-ended scientific question – on top of the significant commercial, social and political interests that need to be navigated...

“Mass discrimination, the black box problem, data protection violations, large-scale unemployment and environmental harms – these are the actual existential risks,” said Prof Sandra Wachter of the University of Oxford, one of the speakers at the summit. “We need to focus on these issues right now and not get distracted by hypothetical risks. This is a disservice to the people who are already suffering under the impact of AI.”"

Sunday, April 14, 2019

Europe's Quest For Ethics In Artificial Intelligence; Forbes, April 11, 2019

Andrea Renda, Forbes; Europe's Quest For Ethics In Artificial Intelligence

"This week a group of 52 experts appointed by the European Commission published extensive Ethics Guidelines for Artificial Intelligence (AI), which seek to promote the development of “Trustworthy AI” (full disclosure: I am one of the 52 experts). This is an extremely ambitious document. For the first time, ethical principles will not simply be listed, but will be put to the test in a large-scale piloting exercise. The pilot is fully supported by the EC, which endorsed the Guidelines and called on the private sector to start using it, with the hope of making it a global standard.

Europe is not alone in the quest for ethics in AI. Over the past few years, countries like Canada and Japan have published AI strategies that contain ethical principles, and the OECD is adopting a recommendation in this domain. Private initiatives such as the Partnership on AI, which groups more than 80 corporations and civil society organizations, have developed ethical principles. AI developers agreed on the Asilomar Principles and the Institute of Electrical and Electronics Engineers (IEEE) worked hard on an ethics framework. Most high-tech giants already have their own principles, and civil society has worked on documents, including the Toronto Declaration focused on human rights. A study led by Oxford Professor Luciano Floridi found significant alignment between many of the existing declarations, despite varying terminologies. They also share a distinctive feature: they are not binding, and not meant to be enforced."