Showing posts with label AI ethics codes. Show all posts

Sunday, April 5, 2026

The Catholic Priest Who Helped Write Anthropic’s A.I. Ethics Code; Observer, March 31, 2026

Observer; The Catholic Priest Who Helped Write Anthropic’s A.I. Ethics Code

"Father Brendan McGuire is writing a novel about a disenchanted monk and his A.I. companion. He’s doing it with Claude. That detail—a Catholic priest using Anthropic’s chatbot to explore questions of faith and artificial consciousness—tells you something about where Silicon Valley’s moral reckoning has arrived. McGuire, 60, leads St. Simon Catholic Parish in Los Altos, Calif., a congregation that counts some of the Valley’s A.I. researchers among its members. Earlier this year, he and a group of faith leaders helped Anthropic shape the Claude Constitution, the set of guiding principles governing how its A.I. behaves.

He is not, in other words, an outside critic. He is something more complicated: a true believer in both God and technology, trying to hold them in the same hand. “I left the tech industry, but it never really left me,” McGuire told Observer...

McGuire wasn’t Anthropic’s only religious collaborator. Bishop Paul Tighe of the Vatican’s Dicastery for Culture and Education and Brian Patrick Green, a technology ethics director at Santa Clara University, also reviewed the Claude Constitution. Green and other Catholic scholars recently filed a federal court brief supporting Anthropic in its lawsuit against the U.S. government, which challenges the company’s effective blacklisting by the Pentagon after it refused to allow its A.I. systems to be used for autonomous warfare or domestic surveillance. The brief praised those ethical limits as “minimal standards of ethical conduct for technical progress.”...

Anthropic says its engagement with religious voices—part of a broader effort to engage a wide variety of communities to keep pace with technological acceleration—is only a beginning. The company plans to expand outreach beyond Catholic institutions to other religious leaders going forward."

Tuesday, December 30, 2025

A code of ethics for AI in education; The Times of Israel, December 29, 2025

 Raz Frohlich, The Times of Israel; A code of ethics for AI in education

"Generative artificial intelligence is transforming every corner of our lives — how we communicate, create, work, and, inevitably, how we teach and learn. As educators, we cannot ignore its power, nor can we embrace it blindly. The rapid pace of AI innovation requires not only technical adaptation, but also deep ethical reflection.

"At Israel Sci-Tech Schools (ISTS), the largest education provider in Israel, we believe that, as AI becomes increasingly present in classrooms, we must ensure that human judgment, accountability, and responsibility remain at the center of education. That is why we are the first in Israel to create a Code of Ethics for Artificial Intelligence in Education. This is not just a policy document but an open invitation for discussion, learning, and shared responsibility across the education system.

This ethical code is not a technical manual, and it does not provide instant answers for daily classroom situations. Instead, it offers a holistic approach — a way of thinking, a framework for educators, students, and policymakers to use AI consciously and responsibly. It asks essential, core-value questions: How do we balance innovation with privacy? How do we ensure equality when access to technology is uneven? How do we maintain transparency when using AI? And when should we pause, reflect, and reconsider how we use AI in the classroom?

To develop the code, we drew from extensive global research and local experience. We consulted with ethicists, educators, technologists, psychologists, and legal experts — and, perhaps most importantly, we listened to students, teachers, and parents. Through roundtable discussions, they shared real concerns and insights about AI’s potential and its pitfalls. Those conversations shaped the code’s seven guiding principles, designed to help schools integrate AI ethically, transparently, and with respect for human dignity."

Thursday, October 4, 2018

The push to create AI-friendly ethics codes is stripping all nuance from morality; Quartz, October 4, 2018

Olivia Goldhill, Quartz; The push to create AI-friendly ethics codes is stripping all nuance from morality

"A paper led by Veljko Dubljević, neuroethics researcher at North Carolina State University, published yesterday (Oct. 2) in PLOS ONE, claims to establish not just the answer to one ethical question, but the entire groundwork for how moral judgements are made.

According to the paper’s “Agent Deed Consequence model,” three things are taken into account when making a moral decision: the person doing the action, the moral action itself, and the consequences of that action. To test this theory, the researchers created moral scenarios that varied details about the agent, the action, and the consequences."

Thursday, September 27, 2018

92% Of AI Leaders Now Training Developers In Ethics, But 'Killer Robots' Are Already Being Built; Forbes, September 26, 2018

John Koetsier, Forbes; 92% Of AI Leaders Now Training Developers In Ethics, But 'Killer Robots' Are Already Being Built

""Organizations have begun addressing concerns and aberrations that AI has been known to cause, such as biased and unfair treatment of people,” Rumman Chowdhury, Responsible AI Lead at Accenture Applied Intelligence, said in a statement. “Organizations need to move beyond directional AI ethics codes that are in the spirit of the Hippocratic Oath to ‘do no harm.’ They need to provide prescriptive, specific and technical guidelines to develop AI systems that are secure, transparent, explainable, and accountable – to avoid unintended consequences and compliance challenges that can be harmful to individuals, businesses, and society.""