
Friday, July 11, 2025

AI must have ethical management, regulation protecting human person, Pope Leo says; The Catholic Register, July 11, 2025

Carol Glatz, The Catholic Register; AI must have ethical management, regulation protecting human person, Pope Leo says

"Pope Leo XIV urged global leaders and experts to establish a network for the governance of AI and to seek ethical clarity regarding its use.

Artificial intelligence "requires proper ethical management and regulatory frameworks centered on the human person, and which goes beyond the mere criteria of utility or efficiency," Cardinal Pietro Parolin, Vatican secretary of state, wrote in a message sent on the pope's behalf.

The message was read aloud by Archbishop Ettore Balestrero, the Vatican representative to U.N. agencies in Geneva, at the AI for Good Summit 2025 being held July 8-11 in Geneva. The Vatican released a copy of the message July 10."

Thursday, July 13, 2023

‘You can do both’: experts seek ‘good AI’ while attempting to avoid the bad; The Guardian, July 7, 2023

The Guardian; ‘You can do both’: experts seek ‘good AI’ while attempting to avoid the bad

"“We know how to make AI that people want, but we don’t know how to make AI that people can trust,” said Marcus.

The question of how to imbue AI with human values is sometimes referred to as “the alignment problem”, although it is not a neatly defined computational puzzle that can be resolved and implemented in law. This means that the question of how to regulate AI is a massive, open-ended scientific question – on top of significant commercial, social and political interests that need to be navigated...

“Mass discrimination, the black box problem, data protection violations, large-scale unemployment and environmental harms – these are the actual existential risks,” said Prof Sandra Wachter of the University of Oxford, one of the speakers at the summit. “We need to focus on these issues right now and not get distracted by hypothetical risks. This is a disservice to the people who are already suffering under the impact of AI.”"