Showing posts with label AI ethics. Show all posts

Sunday, June 22, 2025

Pope Leo calls for an ethical AI framework in a message to tech execs gathering at the Vatican; CNN, June 20, 2025

CNN; Pope Leo calls for an ethical AI framework in a message to tech execs gathering at the Vatican

"Pope Leo XIV says tech companies developing artificial intelligence should abide by an “ethical criterion” that respects human dignity.

AI must take “into account the well-being of the human person not only materially, but also intellectually and spiritually,” the pope said in a message sent Friday to a gathering on AI attended by Vatican officials and Silicon Valley executives.

“No generation has ever had such quick access to the amount of information now available through AI,” he said. But “access to data — however extensive — must not be confused with intelligence.”

He also expressed concern about AI’s impact on children’s “intellectual and neurological development,” writing that “society’s well-being depends upon their being given the ability to develop their God-given gifts and capabilities.”

That statement from the Pope came on the second of a two-day meeting for tech leaders in Rome to discuss the societal and ethical implications of artificial intelligence. The second annual Rome Conference on AI was attended by representatives from AI leaders including Google, OpenAI, Anthropic, IBM, Meta and Palantir along with academics from Harvard and Stanford and representatives of the Holy See.

The event comes at a somewhat fraught moment for AI, with the rapidly advancing technology promising to improve worker productivity, accelerate research and eradicate disease, but also threatening to take human jobs, produce misinformation, worsen the climate crisis and create even more powerful weapons and surveillance capabilities. Some tech leaders have pushed back against regulations intended to ensure that AI is used responsibly, which they say could hinder innovation and global competition.

“In some cases, AI has been used in positive and indeed noble ways to promote greater equality, but there is likewise the possibility of its misuse for selfish gain at the expense of others, or worse, to foment conflict and aggression,” Leo said in his Friday statement."

Thursday, June 19, 2025

AI ‘reanimations’: Making facsimiles of the dead raises ethical quandaries; The Conversation, June 17, 2025

Professor of Philosophy and Director, Applied Ethics Center, UMass Boston, and Senior Research Fellow, Applied Ethics Center, UMass Boston; The Conversation; AI ‘reanimations’: Making facsimiles of the dead raises ethical quandaries

"The use of artificial intelligence to “reanimate” the dead for a variety of purposes is quickly gaining traction. Over the past few years, we’ve been studying the moral implications of AI at the Center for Applied Ethics at the University of Massachusetts, Boston, and we find these AI reanimations to be morally problematic.

Before we address the moral challenges the technology raises, it’s important to distinguish AI reanimations, or deepfakes, from so-called griefbots. Griefbots are chatbots trained on large swaths of data the dead leave behind – social media posts, texts, emails, videos. These chatbots mimic how the departed used to communicate and are meant to make life easier for surviving relations. The deepfakes we are discussing here have other aims; they are meant to promote legal, political and educational causes."

Monday, June 9, 2025

BFI Report Sets Out 9 Recommendations to Ensure “Ethical, Sustainable, Inclusive AI” Use; The Hollywood Reporter, June 8, 2025

Georg Szalai, The Hollywood Reporter; BFI Report Sets Out 9 Recommendations to Ensure “Ethical, Sustainable, Inclusive AI” Use

"A new report published on Monday by the British Film Institute (BFI) sets out nine recommendations for the U.K. screen sector to ensure that artificial intelligence will be a boon rather than bane for film and TV. 

“AI in the Screen Sector: Perspectives and Paths Forward” analyzes current usage and experimentation with “rapidly evolving generative artificial intelligence (AI) technologies,” the BFI said. “To ensure that the U.K. remains a global leader in screen production and creative innovation, the report sets out a roadmap of key recommendations to support the delivery of ethical, sustainable, and inclusive AI integration across the sector.”"

5 Dangerous Myths About AI Ethics You Shouldn’t Believe; Forbes, May 14, 2025

Bernard Marr, Forbes; 5 Dangerous Myths About AI Ethics You Shouldn’t Believe

"AI can empower just about any business to innovate and drive efficiency, but it also has the potential to do damage and cause harm. This means that everyone putting it to use needs to understand the ethical frameworks in place to keep everyone safe.

At the end of the day, AI is a tool. AI ethics can be thought of as the safety warning you get in big letters at the front of any user manual, setting out some firm dos and don’ts about using it.

Using AI almost always involves making ethical choices. In a business setting, understanding the many ways it can affect people and culture means we have the best information for making those choices.

It’s a subject there's still a lot of confusion around, not least involving who is responsible and who should be ensuring this gets done. So here are five common misconceptions I come across involving the ethics of generative AI and machine learning."

Saturday, June 7, 2025

Do AI systems have moral status?; Brookings, June 4, 2025

Brookings; Do AI systems have moral status?

"In March, researchers announced that a large language model (LLM) passed the famous Turing test, a benchmark designed by computer scientist Alan Turing in 1950 to evaluate whether computers could think. This follows research from last year suggesting that the time is now for artificial intelligence (AI) labs to take the welfare of their AI models into account."

Sunday, May 18, 2025

Have journalists skipped the ethics conversation when it comes to using AI?; The Conversation, May 13, 2025

Assistant Professor, School of Journalism, Toronto Metropolitan University; Professor emerita/adjunct professor, Toronto Metropolitan University School of Journalism; and Associate Professor, Journalism, Toronto Metropolitan University, The Conversation; Have journalists skipped the ethics conversation when it comes to using AI?

"Artificial intelligence (AI) is being used in journalistic work for everything from transcribing interviews and translating articles to writing and publishing local weather, economic reports and water quality stories.

It’s even being used to identify story ideas from the minutes of municipal council meetings in cases where time-strapped reporters don’t have time to do so. 

What’s lagging behind all this experimentation are the important conversations about the ethics of using these tools. This disconnect was evident when we interviewed journalists in a mix of newsrooms across Canada from July 2022 to July 2023, and it remains a problem today. 

We conducted semi-structured interviews with 13 journalists from 11 Canadian newsrooms. Many of the people we spoke to told us that they had worked at multiple media organizations throughout their careers.

The key findings from our recently published research:"

Thursday, May 15, 2025

Top Priority for Pope Leo: Warn the World of the A.I. Threat; The New York Times, May 15, 2025

Motoko Rich et al., The New York Times; Top Priority for Pope Leo: Warn the World of the A.I. Threat

"Less than a week into the role, Leo XIV has publicly highlighted his concerns about the rapidly advancing technology. In his inaugural address to the College of Cardinals, he said the church would address the risks that artificial intelligence poses to “human dignity, justice and labor.” And in his first speech to journalists, he cited the “immense potential” of A.I. while warning that it requires responsibility “to ensure that it can be used for the good of all.”

While it is far too early to say how Pope Leo will use his platform to address these concerns or whether he can have much effect, his focus on artificial intelligence shows he is a church leader who grasps the gravity of this modern issue.

Paolo Benanti, a Franciscan friar, professor and the Vatican’s top adviser on the ethics of artificial intelligence, said he was surprised by Leo’s “bold” priorities. Father Benanti remembers that just 15 years ago, when he told his doctoral advisers that he wanted to study cyborgs and human enhancement at the Gregorian, the pontifical university where he now teaches, his advisers thought he was nuts."

Wednesday, April 30, 2025

Is Dignity a Bad Idea for AI Ethics? Responding to Dignity’s Critics; Markkula Center for Applied Ethics, April 29, 2025

Brian Patrick Green (director of technology ethics at the Markkula Center for Applied Ethics; views are his own), Markkula Center for Applied Ethics; Is Dignity a Bad Idea for AI Ethics? Responding to Dignity’s Critics

"The word “dignity” and the various concepts it represents are foundational ideas for international human rights discourse and other ethical systems that protect individuals against each other and the power of states. Dignity can be implicitly included in this discourse, as in the United States Declaration of Independence in 1776 (“We hold these truths to be self-evident, that all men are created equal …”) or explicitly, as in the United Nations Universal Declaration of Human Rights in 1948 (“Whereas recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice, and peace in the world …”). Dignity helps form the groundwork not only for the protection of individuals, but also, via the UN Charter (where it appears in the second line), for the rules-based international order since World War II. Practically speaking, “dignity” helps the world go round, at least in a political way, and that way seems better than some of the alternatives, like a world where human dignity is not internationally acknowledged, such as prior to World War II (when the 1919 Covenant of the League of Nations sought to achieve “peace and security” but not dignity or rights).

However, there are some thinkers who do not like the concept of dignity. A recent article titled, “Why dignity is a troubling concept for AI ethics,” suggests that AI ethics should not use the word dignity any more [1]. I find the article to have several serious problems."

The Tech Industry Tried Reducing AI’s Pervasive Bias. Now Trump Wants to End Its ‘Woke AI’ Efforts; Associated Press via Inc., April 28, 2025

Associated Press via Inc.; The Tech Industry Tried Reducing AI’s Pervasive Bias. Now Trump Wants to End Its ‘Woke AI’ Efforts 

"In the White House and the Republican-led Congress, “woke AI” has replaced harmful algorithmic discrimination as a problem that needs fixing. Past efforts to “advance equity” in AI development and curb the production of “harmful and biased outputs” are a target of investigation, according to subpoenas sent to Amazon, Google, Meta, Microsoft, OpenAI and 10 other tech companies last month by the House Judiciary Committee.

And the standard-setting branch of the U.S. Commerce Department has deleted mentions of AI fairness, safety and “responsible AI” in its appeal for collaboration with outside researchers. It is instead instructing scientists to focus on “reducing ideological bias” in a way that will “enable human flourishing and economic competitiveness,” according to a copy of the document obtained by The Associated Press.

In some ways, tech workers are used to a whiplash of Washington-driven priorities affecting their work.

But the latest shift has raised concerns among experts in the field, including Harvard University sociologist Ellis Monk, who several years ago was approached by Google to help make its AI products more inclusive."

Tuesday, April 29, 2025

How to Avoid Ethical Red Flags in Your AI Projects; IEEE Spectrum, April 27, 2025

IEEE Spectrum; How to Avoid Ethical Red Flags in Your AI Projects

IBM ethics expert Francesca Rossi shares her advice


"For AI solutions raising ethical red flags, we have an internal review process that may lead to modifications. Our assessment extends beyond the technology’s properties (fairness, explainability, privacy) to how it’s deployed. Deployment can either respect human dignity and agency or undermine it. We conduct risk assessments for each technology use case, recognizing that understanding risk requires knowledge of the context in which the technology will operate. This approach aligns with the European AI Act’s framework—it’s not that generative AI or machine learning is inherently risky, but certain scenarios may be high or low risk. High-risk use cases demand additional scrutiny.

In this rapidly evolving landscape, responsible AI engineering requires ongoing vigilance, adaptability, and a commitment to ethical principles that place human well-being at the center of technological innovation."

The Mirror Trap: AI Ethics And The Collapse Of Human Imagination; Forbes, April 27, 2025

Jason Snyder, Forbes; The Mirror Trap: AI Ethics And The Collapse Of Human Imagination

"The Crisis - Imagination and AI Ethics

We are not racing toward a future of artificial intelligence—we are disappearing into a hall of mirrors. The machines that many of us believed would expand reality are erasing it. What we call progress is merely a smoother reflection of ourselves, stripped of our rough edges, originality, and imagination, raising urgent questions for the future of AI ethics.

AI doesn’t innovate; it imitates. It doesn’t create; it converges.

Today, we don’t have artificial intelligence; we have artificial inference, where machines remix data, and humans provide the intelligence. And as we keep polishing the mirrors, we aren’t just losing originality—we are losing ownership of who we are.

The machines won’t need to replace us. We will surrender to their reflection, mistaking its perfection for our purpose. We’re not designing intelligence—we’re designing reflections. And in those reflections, we’re losing ourselves. We think we’re building machines to extend us. But more often, we’re building machines that imitate us—smoother, safer, simpler versions of ourselves."

Sunday, April 27, 2025

Ask the ethicist: How to create guardrails for the AI age; WBUR, April 25, 2025

WBUR; Ask the ethicist: How to create guardrails for the AI age

"Will AI devastate humanity or uplift it? Philosopher Christopher DiCarlo's new book examines how we can navigate when AI surpasses human capacity.

Guest

Christopher DiCarlo, philosopher, educator and ethicist who teaches in Philosophy Department at the University of Toronto. Author of "Building a God: The Ethics of Artificial Intelligence and the Race to Control It.""

Thursday, April 17, 2025

The philosopher’s machine: my conversation with Peter Singer’s AI chatbot; The Guardian, April 17, 2025

The Guardian; The philosopher’s machine: my conversation with Peter Singer’s AI chatbot

"Singer has had a long and high-profile career in philosophy. He has only recently retired from Princeton University, where he was professor of bioethics. His CV is extensive (it runs to a full 102 pages). He has a Substack, a podcast, a Bluesky account, a journal he co-edits, and a polished online presence. As I’m scrolling through the storied septuagenarian philosopher’s blog, the chatbot pops up in the corner of the page. I can’t help feeling like I am being sold something.

In launching his chatbot, Singer described the tool as “employing the Socratic method” – that is, dialogue – to “[guide] users through a process of critical thinking and self-examination”. But I have questions about that.

Me: Do you think it is philosophically justifiable to outsource the discussion of ethics to a chatbot?

Peter Singer AI: Outsourcing ethical discussions to a chatbot can be philosophically justifiable if viewed as a tool to enhance and facilitate ethical reflection. Chatbots can provide insights, pose questions, and offer perspectives that might help individuals think more deeply about ethical issues.

However, it’s crucial to remember that chatbots lack human experience, empathy, and the ability to understand context fully. They should be used as a complement to human judgment and conversation, not a replacement.

Engaging with a chatbot can stimulate thinking and provide a starting point for deeper exploration, but ultimately, ethical decision-making involves personal reflection and discussion with others...

My most memorable and exciting philosophy lessons were the ones in which, as a class, we hotly debated ideas in which we were all materially and emotionally invested. The AI floats loftily above the emotional fray, coolly dispensing pre-programmed wisdom from the cloud, while we grapple with the contradictions and complications of actually living. Between the chatbot and me, there is only one of us for whom the dialogue elicits an emotional response, because for only one of us does the conversation actually matter."

Tuesday, February 18, 2025

AI and ethics: No advancement can ever justify a human rights violation; Vatican News, February 16, 2025

Kielce Gussie, Vatican News; AI and ethics: No advancement can ever justify a human rights violation

"By 2028, global spending on artificial intelligence will skyrocket to $632 billion, according to the International Data Corporation. In a world where smartphones, computers, and ChatGPT continue to be the center of debate, it's no wonder the need for universal regulation and awareness has become a growing topic of discussion.

To address this issue, an international two-day summit focused on AI was held in Paris, France. The goal was to bring stakeholders from the public, private, and academic sectors together to begin building an AI ecosystem that is trustworthy and safe.

Experts in various areas of the artificial intelligence sphere gathered to partake in the discussion, including Australian professor and member of the Australian Government’s Artificial Intelligence Expert Group, Edward Santow. He described feeling hopeful that the summit would advance the safety agenda of AI.

Trustworthiness and safety

On the heels of this summit, the Australian Embassy to the Holy See hosted a panel discussion to address the ethical and human rights challenges in utilizing AI. There, Prof. Santow described his experience at the Paris summit, highlighting the difficulty in building an atmosphere of trust with AI on a global scale."

Wednesday, February 12, 2025

As US and UK refuse to sign the Paris AI Action Summit statement, other countries commit to developing ‘open, inclusive, ethical’ AI; TechCrunch, February 11, 2025

Romain Dillet, TechCrunch; As US and UK refuse to sign the Paris AI Action Summit statement, other countries commit to developing ‘open, inclusive, ethical’ AI

"The Artificial Intelligence Action Summit in Paris was supposed to culminate with a joint declaration on artificial intelligence signed by dozens of world leaders. While the statement isn’t as ambitious as the Bletchley and Seoul declarations, both the U.S. and the U.K. have refused to sign it.

It proves once again that it is difficult to reach a consensus around artificial intelligence — and other topics — in the current (fraught) geopolitical context.

“We feel very strongly that AI must remain free from ideological bias and that American AI will not be co-opted into a tool for authoritarian censorship,” U.S. vice president, JD Vance, said in a speech during the summit’s closing ceremony.


“The United States of America is the leader in AI, and our administration plans to keep it that way,” he added.


In all, 61 countries — including China, India, Japan, Australia, and Canada — have signed the declaration that states a focus on “ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy.” It also calls for greater collaboration when it comes to AI governance, fostering a “global dialogue.”

Early reactions have expressed disappointment over a lack of ambition."

Monday, February 10, 2025

UNESCO Holds Workshop on AI Ethics in Cuba; UNESCO, February 7, 2025

UNESCO; UNESCO Holds Workshop on AI Ethics in Cuba

"During the joint UNESCO-MINCOM National Workshop "Ethics of Artificial Intelligence: Equity, Rights, Inclusion" in Havana, the results of the application of the Readiness Assessment Methodology (RAM) for the ethical development of AI in Cuba were presented.

Similarly, there was a discussion on the Ethical Impact Assessment (EIA), a tool aimed at ensuring that AI systems follow ethical rules and are transparent...

The meeting began with a video message from the Assistant Director-General for Social and Human Sciences, Gabriela Ramos, who emphasized that artificial intelligence already has a significant impact on many aspects of our lives, reshaping the way we work, learn, and organize society.

Technologies can bring us greater productivity, help deliver public services more efficiently, empower society, and drive economic growth, but they also risk perpetuating global inequalities, destabilizing societies, and endangering human rights if they are not safe, representative, and fair, and above all, if they are not accessible to everyone.

Gabriela Ramos, Assistant Director-General for Social and Human Sciences"


Sunday, February 9, 2025

The AI War on Normal People (with Andrew Yang); The Bulwark, February 9, 2025

John Avlon, The Bulwark; The AI War on Normal People (with Andrew Yang)

"The Founding Fathers were aware that yawning gaps between rich and poor destabilize a society. And with AI driving ever greater income inequality while it eats through American jobs—blue-collar, white-collar, and the kind of work in professional services firms that college grads have trained for— our country’s leaders should be responding to the reality that is already upon us. Andrew Yang has been warning for years about the inevitable impacts of AI on our economy and our democracy, and he joins John to discuss possible solutions, including universal basic income and child tax credits.

Andrew Yang joins John Avlon"

Friday, February 7, 2025

Franciscan expert on artificial intelligence addresses its ethical challenges; Catholic News Agency, January 17, 2025

Nicolás de Cárdenas, Catholic News Agency; Franciscan expert on artificial intelligence addresses its ethical challenges

"Franciscan friar Paolo Benanti, an expert in artificial intelligence (AI), warned of its ethical risks during a colloquium organized by the Paul VI Foundation in Madrid, pointing out that “the people who control this type of technology control reality.”

The Italian priest, president of the Italian government’s Commission for Artificial Intelligence, emphasized that “the reality we are facing is different from that of 10 or 15 years ago and it’s a reality defined by software.”

“This starting point has an impact on the way in which we exercise the three classic rights connected with the ownership of a thing: use, abuse, and usufruct,” he explained. (The Cambridge Dictionary defines usufruct as “the legal right to use someone else’s property temporarily and to keep any profit made from it.”)...

Regarding the future, Benanti predicted artificial intelligence will have a major impact on access to information, medicine, and the labor market. Regarding the latter, he noted: “If we do not regulate the impact that artificial intelligence can have on the labor market, we could destroy society as we now know it.”

This story was first published by ACI Prensa, CNA’s Spanish-language news partner. It has been translated and adapted by CNA."

Wednesday, February 5, 2025

Google lifts its ban on using AI for weapons; BBC, February 5, 2025

Lucy Hooker & Chris Vallance, BBC; Google lifts its ban on using AI for weapons

"Google's parent company has ditched a longstanding principle and lifted a ban on artificial intelligence (AI) being used for developing weapons and surveillance tools.

Alphabet has rewritten its guidelines on how it will use AI, dropping a section which previously ruled out applications that were "likely to cause harm".

In a blog post Google defended the change, arguing that businesses and democratic governments needed to work together on AI that "supports national security".

Experts say AI could be widely deployed on the battlefield - though there are fears about its use too, particularly with regard to autonomous weapons systems."