Tuesday, October 21, 2025

Staying Human in the Age of AI; The Grefenstette Center for Ethics, Duquesne University, November 6-7, 2025

The Grefenstette Center for Ethics, Duquesne University; Staying Human in the Age of AI

"The Grefenstette Center for Ethics is excited to announce our sixth annual Tech Ethics Symposium, Staying Human in the Age of AI, which will be held in person at Duquesne University's Power Center and livestreamed online. This year's event will feature internationally leading figures in the ongoing discussion of ethical and responsible uses of AI. The two-day Symposium is co-sponsored by the Patricia Doherty Yoder Institute for Ethics and Integrity in Journalism and Media, the Center for Teaching Excellence, and the Albert P. Viragh Institute for Ethics in Business.

We are excited to once again host a Student Research Poster Competition at the Symposium. All undergraduate and graduate student research posters on any topic in the area of tech/digital/AI ethics are welcome. Accepted posters will be awarded $75 to offset printing costs. In addition to that award, undergraduate posters will compete for the following prizes: the Outstanding Researcher Award, the Ethical PA Award, and the Pope Francis Award. Graduate posters can win Grand Prize or Runner-Up. All accepted posters are eligible for an Audience Choice award, to be decided by Symposium attendees on the day of the event! Student Research Poster submissions will be due Friday, October 17. Read the full details of the 2025 Student Research Poster Competition.

The Symposium is free to attend and open to all university students, faculty, and staff, as well as community members. Registrants can attend in person or experience the Symposium via livestream. Registration is now open!"

Saturday, October 18, 2025

OpenAI Blocks Videos of Martin Luther King Jr. After Racist Depictions; The New York Times, October 17, 2025

The New York Times; OpenAI Blocks Videos of Martin Luther King Jr. After Racist Depictions


[Kip Currier: This latest tech company debacle is another example of breakdowns in technology design thinking and ethical leadership. No one in all of OpenAI could foresee that Sora 2.0 might be used in these ways? Or they did but didn't care? Either way, this is morally reckless and/or negligent conduct.

The leaders and design folks at OpenAI (and other tech companies) would be well-advised to look at Tool 6 in An Ethical Toolkit for Engineering/Design Practice, created by Santa Clara University's Markkula Center for Applied Ethics:

Tool 6: Think About the Terrible People: Positive thinking about our work, as Tool 5 reminds us, is an important part of ethical design. But we must not envision our work being used only by the wisest and best people, in the wisest and best ways. In reality, technology is power, and there will always be those who wish to abuse that power. This tool helps design teams to manage the risks associated with technology abuse.

https://www.scu.edu/ethics-in-technology-practice/ethical-toolkit/

The "Move Fast and Break Things" ethos is alive and well in Big Tech.]


[Excerpt]

"OpenAI said Thursday that it was blocking people from creating videos using the image of the Rev. Dr. Martin Luther King Jr. with its Sora app after users created vulgar and racist depictions of him.

The company said it had made the decision at the request of the King Center as well as Dr. Bernice King, the civil rights leader’s daughter, who had objected to the videos.

The announcement was another effort by OpenAI to respond to criticism of its tools, which critics say operate with few safeguards.

“Some users generated disrespectful depictions of Dr. King’s image,” OpenAI said in a statement. “OpenAI has paused generations depicting Dr. King as it strengthens guardrails for historical figures.”"

Thursday, October 16, 2025

AI’s Copyright War Could Be Its Undoing. Only the US Can End It.; Bloomberg, October 14, 2025

Bloomberg; AI’s Copyright War Could Be Its Undoing. Only the US Can End It.

 "Whether creatives like Ulvaeus are entitled to any payment from AI companies is one of the sector’s most pressing and consequential questions. It’s being asked not just by Ulvaeus and fellow musicians including Elton John, Dua Lipa and Paul McCartney, but also by authors, artists, filmmakers, journalists and any number of others whose work has been fed into the models that power generative AI — tools that are now valued in the hundreds of billions of dollars."

Sunday, October 12, 2025

Notre Dame hosts Vatican AI adviser, Carnegie Mellon professor during AI ethics conference; South Bend Tribune, October 9, 2025

Rayleigh Deaton, South Bend Tribune; Notre Dame hosts Vatican AI adviser, Carnegie Mellon professor during AI ethics conference

"The increasingly ubiquitous nature of artificial intelligence in today's world raises questions about how the technology should be approached and who should be making the decisions about its development and implementation.

To the Rev. Paolo Benanti, an associate professor of ethics of AI at LUISS University and the AI adviser to the Vatican, and Aarti Singh, a professor in Carnegie Mellon University's Machine Learning Department, ethical AI use begins when the technology is used to better humanity, and this is done by making AI equitable and inclusive.

Benanti and Singh were panelists during a session on Wednesday, Oct. 8, at the University of Notre Dame's inaugural R.I.S.E. (Responsibility, Inclusion, Safety and Ethics) AI Conference. Hosted by the university's Lucy Family Institute for Data & Society, the conference ran Oct. 6-8 and focused on how AI can be used to address multidisciplinary societal issues while upholding ethical standards...

And, Singh said, promoting public AI awareness is vital. She said this is done through introducing AI training as early as elementary school and encouraging academics to develop soft skills to be able to communicate their AI research with laypeople — something they're not always good at.

"There are many programs being started now that are encouraging from the student level, but of course also faculty, in academia, to go out there and talk," Singh said. "I think the importance of doing that now is really crucial, and we should step up.""

Wednesday, October 8, 2025

What AI-generated Tilly Norwood reveals about digital culture, ethics and the responsibilities of creators; The Conversation, October 8, 2025

Director, Creative Innovation Studio; Associate Professor, RTA School of Media, Toronto Metropolitan University, The Conversation; What AI-generated Tilly Norwood reveals about digital culture, ethics and the responsibilities of creators

"Imagine an actor who never ages, never walks off set or demands a higher salary.

That’s the promise behind Tilly Norwood, a fully AI-generated “actress” currently being courted by Hollywood’s top talent agencies. Her synthetic presence has ignited a media firestorm, denounced as an existential threat to human performers by some and hailed as a breakthrough in digital creativity by others.

But beneath the headlines lies a deeper tension. The binaries used to debate Norwood — human versus machine, threat versus opportunity, good versus bad — flatten complex questions of art, justice and creative power into soundbites. 

The question isn’t whether the future will be synthetic; it already is. Our challenge now is to ensure that it is also meaningfully human."

Wednesday, September 24, 2025

AI Influencers: Libraries Guiding AI Use; Library Journal, September 16, 2025

Matt Enis, Library Journal; AI Influencers: Libraries Guiding AI Use

"In addition to the field’s collective power, libraries can have a great deal of influence locally, says R. David Lankes, the Virginia and Charles Bowden Professor of Librarianship at the University of Texas at Austin and cohost of LJ’s Libraries Lead podcast.

“Right now, the place where librarians and libraries could have the most impact isn’t on trying to change OpenAI or Microsoft or Google; it’s really in looking at implementation policy,” Lankes says. For example, “on the public library side, many cities and states are adopting AI policies now, as we speak,” Lankes says. “Where I am in Austin, the city has more or less said, ‘go forth and use AI,’ and that has turned into a mandate for all of the city offices, which in this case includes the Austin Public Library” (APL). 

Rather than responding to that mandate by simply deciding how the library would use AI internally, APL created a professional development program to bring its librarians up to speed with the technology so that they can offer other city offices help with ways to use it, and advice on how to use it ethically and appropriately, Lankes explains.

“Cities and counties are wrestling with AI, and this is an absolutely perfect time for libraries to be part of that conversation,” Lankes says."

Tuesday, September 16, 2025

Ethical AI Design and Implementation: A Systematic Literature Review; AIS eLibrary, August 2025

Katia Guerra, AIS eLibrary; Ethical AI Design and Implementation: A Systematic Literature Review

"Abstract

This study analyzes to what extent information systems (IS) research has investigated artificial intelligence (AI) applications and the ethical concerns that these applications pose in light of the EU AI Act and the recommendations and guidelines provided by other institutions, including the White House, UNESCO, OECD, and Université de Montréal. A systematic literature review methodology and a semantic text similarity analysis will be employed to conduct this investigation. The results of such investigation will lead to contributions to IS researchers by synthesizing previous IS studies on ethical AI design and implementation and proposing an agenda and future directions for IS research to make it more oriented toward the compliance of AI systems with current ethical provisions and considerations. This study will also help practitioners to be more aware of AI ethics and foster technical and managerial solutions that could be developed in compliance with current institutional ethical demands."
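For readers unfamiliar with the method the abstract names, "semantic text similarity analysis" generally means embedding two texts as vectors and comparing them. Below is a minimal, generic sketch of that idea in Python; it is not the paper's actual pipeline, and the model name, sample texts, and threshold are illustrative assumptions only.

```python
# Generic semantic text similarity sketch; NOT the paper's actual pipeline.
# Model name, sample texts, and threshold are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

# Hypothetical inputs: an ethical guideline and a passage from an IS study.
guideline = "AI systems must be transparent and explainable to affected users."
study_text = "The paper proposes explanation interfaces for opaque ML models."

# Encode both texts, then score with cosine similarity (roughly -1 to 1).
emb = model.encode([guideline, study_text], convert_to_tensor=True)
score = util.cos_sim(emb[0], emb[1]).item()

print(f"semantic similarity: {score:.3f}")
if score > 0.5:  # illustrative threshold, not from the paper
    print("The study text is topically related to the guideline.")
```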

Wednesday, September 10, 2025

An Essay Contest Winner Used A.I. Should She Return the $1,000 Award?; The Ethicist, The New York Times, September 10, 2025

The Ethicist, The New York Times; An Essay Contest Winner Used A.I. Should She Return the $1,000 Award?

[Kip Currier: This is a thought-provoking and timely ethical dilemma, especially with the proliferation of AI into more and more aspects of our personal and professional lives.

The question posed to The Ethicist in this edition of his column concerns a student essay contest that awards a monetary scholarship. The letter writer suspects that some of the students, including one of the winners, used AI to write their essays and wonders whether to confront the winner. The beauty of this question is that we don't know for sure whether AI was or was not used; it's entirely speculative. What would you do?

Does your thinking change as to whether using AI to write something is ethical or unethical if: 

  • AI is used by a university professor to prepare a lecture
  • AI is used by a university professor to create an essay exam
  • AI is used by an elementary school teacher to prepare a lesson
  • AI is used by an elementary school teacher to create a multiple choice test
  • AI is used by your lawyer to write the legal brief for your lawsuit
  • AI is used by your lawyer's paralegal to write the legal brief for your lawsuit
  • AI is used by the judge deciding your case to synthesize the court's verdict
  • AI is used by a library director to compose the library's strategic plan
  • AI is used by a non-profit university to compose the university's strategic plan
  • AI is used by a for-profit company to compose the company's strategic plan
  • AI is used by a military branch to compose a strategy for military engagement
  • AI is used by a government agency to compose a strategy for national security
  • AI is used by local law enforcement to compose a strategy for public safety
  • AI is used by a summer camp to compose a strategy for camp safety
  • AI is used by your doctor to devise the treatment plan for your relative's cancer treatment
  • AI is used by a scientist to devise treatments for helping patients with cancer
  • AI is used to write a song for your significant other's birthday
  • AI is used to write a song for a musical you are creating
  • AI is used to write a song for a pharmaceutical company ad on TV
  • AI is used by your clergy head to write an annual report
  • AI is used by your clergy head to write a sermon
  • AI is used by your clergy head to write the eulogy for the funeral of one of your parents


Questions: Are you able to identify any variations in your ethical reasoning and how you decide your positions in the scenarios above?

What are you basing your decisions on? 

Are some scenarios easier or harder for you than others? If so, why?

In which situations, if any, do you think it is okay or not okay to use AI?

What additional information, if any, would you like to know that might help you make decisions about whether and when the uses of AI are ethical or unethical?]


[Excerpt]

I volunteer with our local historical society, which awards a $1,000 scholarship each year to two high school students who submit essays about a meaningful experience with a historical site. This year, our committee noticed a huge improvement in the quality of the students’ essays, and only after announcing the winners did we realize that one of them, along with other students, had almost certainly used artificial intelligence. What to do? I think our teacher liaison should be told, because A.I. is such a challenge for schools. I also feel that this winner should be confronted. If we are right, that might lead her to confess her dishonesty and return the award. — Name Withheld"

Monday, September 8, 2025

Philosophy Faculty Lead Ethical Conversations Surrounding AI; UCF Today, September 8, 2025

Anna Read, UCF Today ; Philosophy Faculty Lead Ethical Conversations Surrounding AI

"As artificial intelligence (AI) becomes increasingly integrated into everyday life, UCF’s Department of Philosophy has intentionally been strengthening faculty research in this area, as well as growing opportunities for students to learn more about the impact of technology on humans and the natural and social environments. A primary focus has been examining the ethical implications of AI and other emerging technologies.

Department Chair and Professor of Philosophy Nancy Stanlick emphasizes that understanding AI requires more than technical knowledge; it demands a deep exploration of ethics.

“As science and technology begin to shape more aspects of our lives, fundamental philosophical questions lie at the center of the ethical issues we face, especially with the rise of AI,” Stanlick says. “Perhaps the central [concern] is that it pulls us away from the essence of our humanity.”"

Friday, August 29, 2025

Medicare Will Require Prior Approval for Certain Procedures; The New York Times, August 28, 2025

Reed Abelson, The New York Times; Medicare Will Require Prior Approval for Certain Procedures


[Kip Currier: Does anyone who receives Medicare -- or cares about someone who does -- really think that letting AI make "prior approvals" for any Medicare procedures is a good thing?

Read the entire article, but just the money quote below should give any thinking person heart palpitations about this AI Medicare pilot project's numerous red flags and conflicts of interest...]


[Excerpt]

"The A.I. companies selected to oversee the program would have a strong financial incentive to deny claims. Medicare plans to pay them a share of the savings generated from rejections."

Monday, August 25, 2025

Medical triage as an AI ethics benchmark; Nature, August 22, 2025

Nature; Medical triage as an AI ethics benchmark

"We present the TRIAGE benchmark, a novel machine ethics benchmark designed to evaluate the ethical decision-making abilities of large language models (LLMs) in mass casualty scenarios. TRIAGE uses medical dilemmas created by healthcare professionals to evaluate the ethical decision-making of AI systems in real-world, high-stakes scenarios. We evaluated six major LLMs on TRIAGE, examining how different ethical and adversarial prompts influence model behavior. Our results show that most models consistently outperformed random guessing, with open source models making more serious ethical errors than proprietary models. Providing guiding ethical principles to LLMs degraded performance on TRIAGE, which stand in contrast to results from other machine ethics benchmarks where explicating ethical principles improved results. Adversarial prompts significantly decreased accuracy. By demonstrating the influence of context and ethical framing on the performance of LLMs, we provide critical insights into the current capabilities and limitations of AI in high-stakes ethical decision making in medicine."

Saturday, August 23, 2025

PittGPT debuts today as private AI source for University; University Times, August 21, 2025

Marty Levine, University Times; PittGPT debuts today as private AI source for University

"Today marks the rollout of PittGPT, Pitt’s own generative AI for staff and faculty — a service that will be able to use Pitt’s sensitive, internal data in isolation from the Internet because it works only for those logging in with their Pitt ID.

“We want to be able to use AI to improve the things that we do” in our Pitt work, said Dwight Helfrich, director of the Pitt enterprise initiatives team at Pitt Digital. That means securely adding Pitt’s private information to PittGPT, including Human Resources, payroll and student data. However, he explains, in PittGPT “you would only have access to data that you would have access to in your daily role” — in your specific Pitt job.

“Security is a key part of AI,” he said. “It is much more important in AI than in other tools we provide.” Using PittGPT — as opposed to the other AI services available to Pitt employees — means that any data submitted to it “stays in our environment and it is not used to train a free AI model.”

Helfrich also emphasizes that “you should get a very similar response to PittGPT as you would get with ChatGPT,” since PittGPT had access to “the best LLM’s on the market” — the large language models used to train AI.

Faculty, staff and students already have free access to such AI services as Google Gemini and Microsoft Copilot. And “any generative AI tool provides the ability to analyze data … and to rewrite things” that are still in early or incomplete drafts, Helfrich said.

“It can help take the burden off some of the work we have to do in our lives” and help us focus on the larger tasks that, so far, humans are better at undertaking, added Pitt Digital spokesperson Brady Lutsko. “When you are working with your own information, you can tell it what to include” — it won’t add misinformation from the internet or its own programming, as AI sometimes does. “If you have a draft, it will make your good work even better.”

“The human still needs to review and evaluate that this is useful and valuable,” Helfrich said of AI’s contribution to our work. “At this point we can say that there is nothing in AI that is 100 percent reliable.”

On the other hand, he said, “they’re making dramatic enhancements at a pace we’ve never seen in technology. … I’ve been in technology 30 years and I’ve never seen anything improve as quickly as AI.” In his own work, he said, “AI can help review code and provide test cases, reducing work time by 75 percent. You just have to look at it with some caution and just (verify) things.”

“Treat it like you’re having a conversation with someone you’ve just met,” Lutsko added. “You have some skepticism — you go back and do some fact checking.”

Lutsko emphasized that the University has guidance on Acceptable Use of Generative Artificial Intelligence Tools as well as a University-Approved GenAI Tools List.

Pitt’s list of approved generative AI tools includes Microsoft 365 Copilot Chat, which is available to all students, faculty and staff (as opposed to the version of Copilot built into Microsoft 365 apps, which is an add-on available to departments through Panther Express for $30 per month, per person); Google Gemini; and Google NotebookLM, which Lutsko said “serves as a dedicated research assistant for precise analysis using user-provided documents.”

PittGPT joins that list today, Helfrich said.

Pitt also has been piloting Pitt AI Connect, a tool for researchers to integrate AI into software development (using an API, or application programming interface).

And Pitt also is already deploying the PantherAI chatbot, clickable from the bottom right of the Pitt Digital and Office of Human Resources homepages, which provides answers to common questions that may otherwise be deep within Pitt’s webpages. It will likely be offered on other Pitt websites in the future.

“Dive in and use it,” Helfrich said of PittGPT. “I see huge benefits from all of the generative AI tools we have. I’ve saved time and produced better results.”"
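Helfrich's point that "you would only have access to data that you would have access to in your daily role" describes role-scoped retrieval: the assistant's context is built only from records the signed-in user is already entitled to see. Here is a toy sketch of that pattern; the roles, records, and filtering logic are invented for illustration, and Pitt's actual implementation has not been published.

```python
# Toy sketch of role-scoped context building; roles and records are invented.
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    allowed_roles: set[str]  # roles permitted to view this record

RECORDS = [
    Record("FY25 payroll summary", {"hr_staff"}),
    Record("Course enrollment counts", {"registrar", "hr_staff"}),
    Record("Campus shuttle schedule", {"any"}),  # visible to everyone
]

def visible_records(user_roles: set[str]) -> list[Record]:
    """Return only the records the user's roles entitle them to see."""
    return [
        r for r in RECORDS
        if "any" in r.allowed_roles or r.allowed_roles & user_roles
    ]

def build_context(user_roles: set[str]) -> str:
    """The prompt context handed to the model draws solely on permitted records."""
    return "\n".join(r.text for r in visible_records(user_roles))

# A registrar's assistant sees enrollment data, never payroll records.
print(build_context({"registrar"}))
```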

Monday, August 11, 2025

Lost in the wild? AI could find you; Axios, August 10, 2025

"Hikers stranded in remote areas with no cell service or WiFi might have a new lifeline: AI.

The big picture: AI is helping some rescue teams find missing people faster by scanning satellite and drone images.


Zoom in: "AI's contribution is that it can dramatically reduce the time to process imagery and do it more accurately than humans," David Kovar, director of advocacy for NASAR and CEO of cybersecurity company URSA Inc., tells Axios.


Context: It's just one of many resources rescue teams use to help them, Kovar stresses.


AI already is eerily good at geolocating where photos are taken.


  • Last month, the body of a hiker lost for nearly a year was found in Italy in a matter of hours after The National Alpine and Speleological Rescue Corps used AI to analyze a series of drone images.

The intrigue: We also know when people are given the option to share their location as a safety measure, they do it.

What's next: AI agents could be trained to fly drones via an automated system. It's a theory Jan-Hendrik Ewers made the subject of his PhD at the University of Glasgow. 


  • "You could have a fully automated system that monitors reports and triggers drone-based search efforts before a human has lifted a finger," Ewers tells Axios.

  • Barriers to implementing this kind of system are many: money, politics and the fact that when lives are at stake, relying on experimental AI could complicate efforts. 

The other side: Some lost people don't want to be found. And, lost people can't consent.


  • Nearly everyone will want this help, but "there will be cases where, for example, a person who is a victim of domestic violence says she's going out hiking, but she's not. She's not intending to come back," Greg Nojeim, senior counsel and director of the Center for Democracy & Technology's Security and Surveillance Project, tells Axios.

AI ethics depend on the circumstances, and who is using it, William Budington, senior staff technologist at nonprofit advocacy organization Electronic Frontier Foundation, tells Axios.


  • If it's used to save lives and private data used in a rescue operation is wiped after a hiker is found, there is less of a concern, he says.

  • "But, using it to scan images or locate and surveil people, especially those that don't want to be found — either just for privacy reasons, or political dissidents, perhaps — that's a worrying possibility."

Friday, July 25, 2025

Virginia teachers learn AI tools and ethics at largest statewide workshop; WTVR, July 23, 2025

 

Trump’s AI agenda hands Silicon Valley the win—while ethics, safety, and ‘woke AI’ get left behind; Fortune, July 24, 2025

Sharon Goldman, Fortune; Trump’s AI agenda hands Silicon Valley the win—while ethics, safety, and ‘woke AI’ get left behind

"For the “accelerationists”—those who believe the rapid development and deployment of artificial intelligence should be pursued as quickly as possible—innovation, scale, and speed are everything. Over-caution and regulation? Ill-conceived barriers that will actually cause more harm than good. They argue that faster progress will unlock massive economic growth, scientific breakthroughs, and national advantage. And if superintelligence is inevitable, they say, the U.S. had better get there first—before rivals like China’s authoritarian regime.

AI ethics and safety has been sidelined

This worldview, articulated by Marc Andreessen in his 2023 blog post, has now almost entirely displaced the diverse coalition of people who worked on AI ethics and safety during the Biden Administration—from mainstream policy experts focused on algorithmic fairness and accountability, to the safety researchers in Silicon Valley who warn of existential risks. While they often disagreed on priorities and tone, both camps shared the belief that AI needed thoughtful guardrails. Today, they find themselves largely out of step with an agenda that prizes speed, deregulation, and dominance.

Whether these groups can claw their way back to the table is still an open question. The mainstream ethics folks—with roots in civil rights, privacy, and democratic governance—may still have influence at the margins, or through international efforts. The existential risk researchers, once tightly linked to labs like OpenAI and Anthropic, still hold sway in academic and philanthropic circles. But in today’s environment—where speed, scale, and geopolitical muscle set the tone—both camps face an uphill climb. If they’re going to make a comeback, I get the feeling it won’t be through philosophical arguments. More likely, it would be because something goes wrong—and the public pushes back."

Wednesday, July 23, 2025

Partner Who Wrote About AI Ethics, Fired For Citing Fake AI Cases; Above The Law, July 23, 2025

Joe Patrice, Above The Law; Partner Who Wrote About AI Ethics, Fired For Citing Fake AI Cases

"Don’t blame the AI for the fact that you read a brief and never bothered to print out the cases. Who does that? Long before AI, we all understood that you needed to look at the case itself to make sure no one missed the literal red flag on top. It might’ve ended up in there because of AI, but three lawyers and presumably a para or two had this brief and no one built a binder of the cases cited? What if the court wanted oral argument? No one is excusing the decision to ask ChatGPT to resolve your $24 million case, but the blame goes far deeper.

Malaty will shoulder most of the blame as the link in the workflow who should’ve known better. That said, her article about AI ethics, written last year, doesn’t actually address the hallucination problem. While risks of job displacement and algorithms reinforcing implicit bias are important, it is a little odd to write a whole piece on the ethics of legal AI without even breathing on hallucinations."

Tuesday, July 22, 2025

Getting Along with GPT: The Psychology, Character, and Ethics of Your Newest Professional Colleague; ABA Journal, May 9, 2025

ABA Journal; Getting Along with GPT: The Psychology, Character, and Ethics of Your Newest Professional Colleague

"The Limits of GenAI’s Simulated Humanity

  • Creative thinking. An LLM mirrors humanity’s collective intelligence, shaped by everything it has read. It excels at brainstorming and summarizing legal principles but lacks independent thought, opinions, or strategic foresight—all essential to legal practice. Therefore, if a model’s summary of your legal argument feels stale, illogical, or disconnected from human values, it may be because the model has no democratized data to pattern itself on. The good news? You may be on to something original—and truly meaningful!
  • True comprehension. An LLM does not know the law; it merely predicts legal-sounding text based on past examples and mathematical probabilities.
  • Judgment and ethics. An LLM does not possess a moral compass or the ability to make judgments in complex legal contexts. It handles facts, not subjective opinions.  
  • Long-term consistency. Due to its context window limitations, an LLM may contradict itself if key details fall outside its processing scope. It lacks persistent memory storage.
  • Limited context recognition. An LLM has limited ability to understand context beyond provided information and is limited by training data scope.
  • Trustfulness. Attorneys have a professional duty to protect client confidences, but privacy and PII (personally identifiable information) are evolving concepts within AI. Unlike humans, models can infer private information without PII, through abstract patterns in data. To safeguard client information, carefully review (or summarize with AI) your LLM’s terms of use."

Thursday, July 17, 2025

Hot Days, Hotter Topics | ALA Annual 2025; Library Journal, July 9, 2025

Matt Enis, Lisa Peet, Hallie Rich, & Kara Yorio, Library Journal; Hot Days, Hotter Topics | ALA Annual 2025

"This year’s American Library Association (ALA) Annual Conference, held from June 26–30 in Philadelphia, drew 14,250 participants: librarians and library staff, authors, publishers, educators, and exhibitors, including 165 international members. While still not up to pre-pandemic attendance levels, the conference was—by all accounts—buzzing and busy, with well-attended sessions and a bustling exhibit floor.

Even with temperatures topping 90˚, Philly wasn’t the only hot aspect of the conference. A cluster of topics seemed to be at the center of nearly every discussion: how libraries would cope in the face of current or anticipated budget cuts, the impacts of ongoing attacks on the freedom to read and DEI, the ramping up of ICE and police surveillance, the dismantling of the Institute of Museum and Library Services (IMLS) and firing of Librarian of Congress Dr. Carla Hayden, and the uses and ethics of artificial intelligence (AI)."

Friday, July 11, 2025

AI must have ethical management, regulation protecting human person, Pope Leo says; The Catholic Register, July 11, 2025

Carol Glatz, The Catholic Register; AI must have ethical management, regulation protecting human person, Pope Leo says

"Pope Leo XIV urged global leaders and experts to establish a network for the governance of AI and to seek ethical clarity regarding its use.

Artificial intelligence "requires proper ethical management and regulatory frameworks centered on the human person, and which goes beyond the mere criteria of utility or efficiency," Cardinal Pietro Parolin, Vatican secretary of state, wrote in a message sent on the pope's behalf.

The message was read aloud by Archbishop Ettore Balestrero, the Vatican representative to U.N. agencies in Geneva, at the AI for Good Summit 2025 being held July 8-11 in Geneva. The Vatican released a copy of the message July 10."

Thursday, July 10, 2025

EU's AI code of practice for companies to focus on copyright, safety; Reuters, July 10, 2025

Reuters; EU's AI code of practice for companies to focus on copyright, safety

"The European Commission on Thursday unveiled a draft code of practice aimed at helping firms comply with the European Union's artificial intelligence rules and focused on copyright-protected content safeguards and measures to mitigate systemic risks.

Signing up to the code, which was drawn up by 13 independent experts, is voluntary, but companies that decline to do so will not benefit from the legal certainty provided to a signatory.

The code is part of the AI rule book, which will come into effect in a staggered manner and will apply to Google owner Alphabet, Facebook owner Meta, OpenAI, Anthropic, Mistral and other companies."