Showing posts with label AI ethics.

Tuesday, February 18, 2025

AI and ethics: No advancement can ever justify a human rights violation; Vatican News, February 16, 2025

Kielce Gussie, Vatican News; AI and ethics: No advancement can ever justify a human rights violation

"By 2028, global spending on artificial intelligence will skyrocket to $632 billion, according to the International Data Corporation. In a world where smartphones, computers, and ChatGPT continue to be the center of debate, it's no wonder the need for universal regulation and awareness has become a growing topic of discussion.

To address this issue, an international two-day summit focused on AI was held in Paris, France. The goal was to bring stakeholders from the public, private, and academic sectors together to begin building an AI ecosystem that is trustworthy and safe.

Experts in various areas of the artificial intelligence sphere gathered to partake in the discussion, including Australian professor and member of the Australian Government’s Artificial Intelligence Expert Group, Edward Santow. He described feeling hopeful that the summit would advance the safety agenda of AI.

Trustworthiness and safety

On the heels of this summit, the Australian Embassy to the Holy See hosted a panel discussion to address the ethical and human rights challenges in utilizing AI. There, Prof. Santow described his experience at the Paris summit, highlighting the difficulty in building an atmosphere of trust with AI on a global scale."

Wednesday, February 12, 2025

As US and UK refuse to sign the Paris AI Action Summit statement, other countries commit to developing ‘open, inclusive, ethical’ AI; TechCrunch, February 11, 2025

Romain Dillet, TechCrunch; As US and UK refuse to sign the Paris AI Action Summit statement, other countries commit to developing ‘open, inclusive, ethical’ AI

"The Artificial Intelligence Action Summit in Paris was supposed to culminate with a joint declaration on artificial intelligence signed by dozens of world leaders. While the statement isn’t as ambitious as the Bletchley and Seoul declarations, both the U.S. and the U.K. have refused to sign it.

It proves once again that it is difficult to reach a consensus around artificial intelligence — and other topics — in the current (fraught) geopolitical context.

“We feel very strongly that AI must remain free from ideological bias and that American AI will not be co-opted into a tool for authoritarian censorship,” U.S. Vice President JD Vance said in a speech during the summit’s closing ceremony.


“The United States of America is the leader in AI, and our administration plans to keep it that way,” he added.


In all, 61 countries — including China, India, Japan, Australia, and Canada — have signed the declaration that states a focus on “ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy.” It also calls for greater collaboration when it comes to AI governance, fostering a “global dialogue.”

Early reactions have expressed disappointment over a lack of ambition."

Monday, February 10, 2025

UNESCO Holds Workshop on AI Ethics in Cuba; UNESCO, February 7, 2025

UNESCO; UNESCO Holds Workshop on AI Ethics in Cuba

"During the joint UNESCO-MINCOM National Workshop "Ethics of Artificial Intelligence: Equity, Rights, Inclusion" in Havana, the results of the application of the Readiness Assessment Methodology (RAM) for the ethical development of AI in Cuba were presented.

Similarly, there was a discussion on the Ethical Impact Assessment (EIA), a tool aimed at ensuring that AI systems follow ethical rules and are transparent...

The meeting began with a video message from the Assistant Director-General for Social and Human Sciences, Gabriela Ramos, who emphasized that artificial intelligence already has a significant impact on many aspects of our lives, reshaping the way we work, learn, and organize society.

Technologies can bring us greater productivity, help deliver public services more efficiently, empower society, and drive economic growth, but they also risk perpetuating global inequalities, destabilizing societies, and endangering human rights if they are not safe, representative, and fair, and above all, if they are not accessible to everyone.

Gabriela Ramos, Assistant Director-General for Social and Human Sciences"


Sunday, February 9, 2025

The AI War on Normal People (with Andrew Yang); The Bulwark, February 9, 2025

John Avlon, The Bulwark; The AI War on Normal People (with Andrew Yang)

"The Founding Fathers were aware that yawning gaps between rich and poor destabilize a society. And with AI driving ever greater income inequality while it eats through American jobs—blue-collar, white-collar, and the kind of work in professional services firms that college grads have trained for— our country’s leaders should be responding to the reality that is already upon us. Andrew Yang has been warning for years about the inevitable impacts of AI on our economy and our democracy, and he joins John to discuss possible solutions, including universal basic income and child tax credits.

Andrew Yang joins John Avlon"

Friday, February 7, 2025

Franciscan expert on artificial intelligence addresses its ethical challenges; Catholic News Agency, January 17, 2025

Nicolás de Cárdenas, Catholic News Agency; Franciscan expert on artificial intelligence addresses its ethical challenges

"Franciscan friar Paolo Benanti, an expert in artificial intelligence (AI), warned of its ethical risks during a colloquium organized by the Paul VI Foundation in Madrid, pointing out that “the people who control this type of technology control reality.”

The Italian priest, president of the Italian government’s Commission for Artificial Intelligence, emphasized that “the reality we are facing is different from that of 10 or 15 years ago and it’s a reality defined by software.”

“This starting point has an impact on the way in which we exercise the three classic rights connected with the ownership of a thing: use, abuse, and usufruct,” he explained. (The Cambridge Dictionary defines usufruct as “the legal right to use someone else’s property temporarily and to keep any profit made from it.”)...

Regarding the future, Benanti predicted artificial intelligence will have a major impact on access to information, medicine, and the labor market. Regarding the latter, he noted: “If we do not regulate the impact that artificial intelligence can have on the labor market, we could destroy society as we now know it.”

This story was first published by ACI Prensa, CNA’s Spanish-language news partner. It has been translated and adapted by CNA."

Wednesday, February 5, 2025

Google lifts its ban on using AI for weapons; BBC, February 5, 2025

Lucy Hooker & Chris Vallance, BBC; Google lifts its ban on using AI for weapons

"Google's parent company has ditched a longstanding principle and lifted a ban on artificial intelligence (AI) being used for developing weapons and surveillance tools.

Alphabet has rewritten its guidelines on how it will use AI, dropping a section which previously ruled out applications that were "likely to cause harm".

In a blog post, Google defended the change, arguing that businesses and democratic governments needed to work together on AI that "supports national security".

Experts say AI could be widely deployed on the battlefield - though there are fears about its use too, particularly with regard to autonomous weapons systems."

Wednesday, January 29, 2025

The Vatican urges ethical AI use in warfare and healthcare; Quartz, January 29, 2025

Quartz Intelligence Newsroom, Quartz; The Vatican urges ethical AI use in warfare and healthcare

"This story incorporates reporting from  AngelusCatholic News Agency and The New York Times.


The Vatican has released a comprehensive document offering new guidelines for the ethical development and use of artificial intelligence, with a focus on areas such as warfare and healthcare...

Ultimately, the Vatican’s guidelines encourage deeper engagement with the humanities, suggesting that AI’s rise should inspire renewed interest in understanding and valuing the human condition. This approach positions AI as a tool for enhancing, not diminishing, human creativity, empathy, and moral responsibility. Through continued dialogue and regulation, the Vatican hopes to steer AI development towards a future that aligns with ethical and spiritual values."

Monday, January 27, 2025

Beyond ChatGPT: WVU researchers to study use and ethics of artificial intelligence across disciplines; WVU Today, January 22, 2025

WVU Today; Beyond ChatGPT: WVU researchers to study use and ethics of artificial intelligence across disciplines

"Two West Virginia University researchers have designed a curriculum to engage liberal arts faculty in discussions on the social, ethical and technical aspects of artificial intelligence and its role in classrooms.

Through a grant from the National Endowment for the Humanities, Erin Brock Carlson, assistant professor of English, and Scott Davidson, professor of philosophy, both at the WVU Eberly College of Arts and Sciences, have designed an interdisciplinary, cross-institutional program to facilitate conversations among faculty about the benefits and drawbacks of AI, how it functions and the need for human interpretation.

The award will fund a summer workshop in which Carlson and Davidson will offer AI trainings for humanities faculty and guide them through creation and development of courses with an AI component. The researchers will then assist as faculty offer those courses to students, assess progress and help with the implementation of the projects that develop.

The researchers said they hope to challenge the notion that artificial intelligence research falls into the domain of STEM fields. 

“The humanities gets overlooked and underappreciated so often,” Carlson said. “We are doing important, meaningful research, just like our colleagues in STEM and other fields. This is a chance to use a humanities lens to examine contemporary problems and developments like artificial intelligence and also to get conversations going between fields that oftentimes don’t talk to one another as much as we should.”

Co-directors Carlson and Davidson will be joined by a team of mentors and fellows — two from data science fields and two from the humanities — who will serve as resources and assist in the interdisciplinary conversations. The seminar and summer workshops will support the creation or redesign of 10 courses. They plan to invite off-campus experts to help facilitate the workshops, work with the faculty and support their projects.

“It’s really about expanding capacity at the University and in the humanities to investigate the implications of AI or to actually use AI in humanities courses, whether it’s for writing, creating art or creating projects through the use of AI,” Davidson said. “There are a lot of different possibilities and directions that we hope these courses take. If we have 10 of them, it’s really going to have a big impact on humanities education here at the University.”

Carlson and Davidson acknowledge that attitudes about AI tend to be either extremely optimistic or extremely skeptical but that the reality is somewhere in the middle.

“AI is such a simplistic term to describe a whole suite of different technologies and developments that folks are dealing with every day, whether they know it or not,” Carlson said, noting that discussions could focus on personal, social and economic impacts of AI use, as well as how it affects character and intellectual values. 

Davidson was inspired to focus on AI when he found an erroneous, AI-generated summary of one of his own articles.

“It was totally wrong,” he said. “I didn’t say those things, and it made me think about how somebody might look me up and find that summary of my article and get this false impression of me. That really highlighted that we need to build an understanding in students of the need to inquire deeper and to understand that you have to be able to evaluate AI’s accuracy and its reliability.”

Carlson and Davidson said the conversations need to consider AI’s drawbacks, as well. Using AI consumes large amounts of water and electricity, resulting in greenhouse gas emissions. Data centers produce electronic waste that can contain mercury and lead.

They also intend to follow legal cases and precedents surrounding the use of AI.

“That’s another aspect of AI and the ways that it represents people,” Carlson said. “Because it has a very real, material impact on people in communities. It’s not just a super computer in a room. It’s a network that has a bunch of different implications for a bunch of different people, ranging from jobs to familial relationships. That’s the value of the humanities — to ask these tough questions because it’s increasingly difficult to avoid all of it.”

Conversations, as they expand, will need to keep up with the pace of AI’s rapidly developing landscape.  

“There’s going to be a lot of people involved in this,” she said. “We put together an amazing team. We want it to be an open, honest and ethical conversation that brings in other folks and opens up further conversations across the College and the University at large.”"

Wednesday, January 8, 2025

New SUNY requirements to focus on civic skills and AI ethics; WHEC News10NBC, January 7, 2025

WHEC News10NBC; New SUNY requirements to focus on civic skills and AI ethics 

[Kip Currier: This is an intriguing development by the SUNY higher education system. It will be interesting to see how these efforts are assessed in future years (or even sooner, as I can imagine these general education requirements will likely catalyze spirited discussion about how to define -- and who gets to define -- "civic discourse", "healthy dialogues", and even "AI ethics").

Also, the end of the story notes that "AI assisted with the formatting of the story" and provides a link to learn more about how the news station uses AI. 

This is their policy, at present:

"News 10 NBC’S guidelines for using Artificial Intelligence

News10NBC uses artificial intelligence (A.I.) tools to help format some of our news stories from broadcast style to our digital print style. We do not use A.I. for research, content, reporting, or imaging. All news content that uses News10NBC’s A.I. resources is reviewed and approved in the News10NBC editorial process before publishing."

It's good practice, in terms of transparency and user awareness, that they do specify the current parameters of their use of AI.

A question to keep our eyes on:

  • How might this policy change as AI evolves, or if/when economic circumstances change, as we know they do? For example, news organizations like this one might at some point decide to downsize and have to do more reporting with fewer human reporters and news staff.]


News story:

"The State University of New York (SUNY) system is set to introduce new general education requirements for its students.

These updates will add a civic discourse component to the core competencies of the general education curriculum. 

According to SUNY, this new requirement aims to ensure that “students gain the skills necessary to participate in civic life and engage in healthy dialogues in order to secure the future of our democracy.”

“SUNY is committed to academic excellence, which includes a robust general education curriculum,” said SUNY Chancellor King. “We are proud that every SUNY student will be expected to demonstrate the knowledge and skills that advance respectful and reasoned discourse, and that we will help our students recognize and ethically use AI as they consider various information sources.”

Additionally, students will be required to learn how to recognize and ethically use artificial intelligence. The new requirements are expected to start in fall 2026."

Monday, January 6, 2025

We're using AI for stupid and unnecessary reasons. What if we just stopped? | Opinion; Detroit Free Press, January 6, 2025

Nancy Kaffer, Detroit Free Press; We're using AI for stupid and unnecessary reasons. What if we just stopped? | Opinion

"We're jumping feet first into unreliable, unproven tech with devastating environmental costs and a dense thicket of ethical problems.

It's a bad idea. And — because I enjoy shouting into the void — we really ought to stop."

Friday, December 27, 2024

New Course Creates Ethical Leaders for an AI-Driven Future; George Mason University, December 10, 2024

Buzz McClain, George Mason University; New Course Creates Ethical Leaders for an AI-Driven Future

"While the debates continue over artificial intelligence’s possible impacts on privacy, economics, education, and job displacement, perhaps the largest question regards the ethics of AI. Bias, accountability, transparency, and governance of the powerful technology are aspects that have yet to be fully answered.

A new cross-disciplinary course at George Mason University is designed to prepare students to tackle the ethical, societal, and governance challenges presented by AI. The course, AI: Ethics, Policy, and Society, will draw expertise from the Schar School of Policy and Government, the College of Engineering and Computing (CEC), and the College of Humanities and Social Sciences (CHSS).

The master’s degree-level course begins in spring 2025 and will be taught by Jesse Kirkpatrick, a research associate professor in the CEC and the Department of Philosophy, and co-director of the Mason Autonomy and Robotics Center.

The course is important now, said Kirkpatrick, because “artificial intelligence is transforming industries, reshaping societal norms, and challenging long-standing ethical frameworks. This course provides critical insights into the ethical, societal, and policy implications of AI at a time when these technologies are increasingly deployed in areas like healthcare, criminal justice, and national defense.”"

Why ethics is becoming AI's biggest challenge; ZDNet, December 27, 2024

Joe McKendrick, ZDNet; Why ethics is becoming AI's biggest challenge

"Many of the technical issues associated with artificial intelligence have been resolved, but the hard work surrounding AI ethics is now coming to the forefront. This is proving even more challenging than addressing technology issues.

The challenge for development teams at this stage is "to recognize that creating ethical AI is not strictly a technical problem but a socio-technical problem," said Phaedra Boinodiris, global leader for trustworthy AI at IBM Consulting, in a recent podcast. This means extending AI oversight beyond IT and data management teams across organizations.

To build responsibly curated AI models, "you need a team composed of more than just data scientists," Boinodiris said. "For decades, we've been communicating that those who don't have traditional domain expertise don't belong in the room. That's a huge misstep."

"It's also notable that well-curated AI models "are also more accurate models," she added. To achieve this, "the team designing the model should be multidisciplinary rather than siloed." The ideal AI team should include "linguistics and philosophy experts, parents, young people, everyday people with different life experiences from different socio-economic backgrounds," she urged. "The wider the variety, the better." Team members are needed to weigh in on the following types of questions:

  • "Is this AI solving the problem we need it to?"
  • "Is this even the right data according to domain experts?"
  • "What are the unintended effects of AI?"
  • "How can we mitigate those effects?""

Thursday, November 28, 2024

Is using AI tools innovation or exploitation? 3 ways to think about the ethics; The Conversation, November 27, 2024

Dean and Professor, College of University Libraries and Learning Sciences, University of New Mexico, The Conversation; Is using AI tools innovation or exploitation? 3 ways to think about the ethics

"Across industries, workers encounter more immediate ethical questions about whether to use AI every day. In a trial by the U.K.-based law firm Ashurst, three AI systems dramatically sped up document review but missed subtle legal nuances that experienced lawyers would catch. Similarly, journalists must balance AI’s efficiency for summarizing background research with the rigor required by fact-checking standards.

These examples highlight the growing tension between innovation and ethics. What do AI users owe the creators whose work forms the backbone of those technologies? How do we navigate a world where AI challenges the meaning of creativity – and humans’ role in it?

As a dean overseeing university libraries, academic programs and the university press, I witness daily how students, staff and faculty grapple with generative AI. Looking at three different schools of ethics can help us go beyond gut reactions to address core questions about how to use AI tools with honesty and integrity."

Monday, October 28, 2024

Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself; New Jersey Institute of Technology, October 22, 2024

Evan Koblentz, New Jersey Institute of Technology; Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself

"Expert panelists took a measured tone about the trends, challenges and ethics of artificial intelligence, at a campus forum organized by NJIT’s Institute for Data Science this month.

The panel moderator was institute director David Bader, who is also a distinguished professor in NJIT’s Ying Wu College of Computing and who shared his own thoughts on AI in a separate Q&A recently. The panel members were Kevin Coulter, field CTO for AI, Dell Technologies; Grace Wang, distinguished professor and director of NJIT’s Center for Artificial Intelligence Research; and Mengjia Xu, assistant professor of data science. DataBank Ltd., a data center firm that hosts NJIT’s Wulver high-performance computing cluster, was the event sponsor...

Bader: “There's also a lot of concerns that get raised with AI in terms of privacy, in terms of ethics, in terms of its usage. So I really want to understand your thoughts on how we ensure that AI systems are developed and deployed ethically. And are there specific frameworks or guidelines that you would follow?”...

Wang: “Well, I always believe that AI at its core is just a tool, so there's no difference between AI and, say, lock-picking tools. Lock-picking tools can open your door if you lock yourself out, and they can also open others'. That's a crime, right? So it depends on how AI is used. From that perspective, there's not much special when we talk about AI ethics, or, say, computer security ethics, or the ethics related to how to use a gun, for example. But what is different is, as AI is too complex, it's beyond the knowledge of many of us how it works. Sometimes it looks ethical but maybe what's behind it is amplifying the bias by using the AI tools without our knowledge. So whenever we talk about AI ethics, I think the most important one is education: if you know what AI is about, how it works and what AI can do and what AI cannot. I think for now we have the fear that AI is so powerful it can do anything, but actually, many of the things that people believe AI can do now could be done in the past by just any software system. So education is very, very important to help us to demystify AI accordingly, so we can talk about AI ethics. I want to emphasize transparency. If AI is used for decision making, if we understand how the decision is made, that becomes very, very important. And another important topic related to AI ethics is auditing if we don't know what's inside. At least we have some assessment tools to know whether there's a risk or not in certain circumstances, whether it can generate a harmful result or not. It's very much like the stress testing of the financial system after 2008.”"

Tuesday, October 15, 2024

AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members; Business Wire, October 15, 2024

Business Wire; AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members

"The AI Ethics Council, founded by OpenAI CEO Sam Altman and Operation HOPE CEO John Hope Bryant, announced today that Reid Hoffman (Co-Founder of LinkedIn and Inflection AI and Partner at Greylock) and Van Jones (CNN commentator, Dream Machine Founder and New York Times best-selling author) have joined as a members. Formed in December 2023, the Council brings together an interdisciplinary body of diverse experts including civil rights activists, HBCU presidents, technology and business leaders, clergy, government officials and ethicists to collaborate and set guidelines on ways to ensure that traditionally underrepresented communities have a voice in the evolution of artificial intelligence and to help frame the human and ethical considerations around the technology. Ultimately, the Council also seeks to help determine how AI can be harnessed to create vast economic opportunities, especially for the underserved.

Mr. Hoffman and Mr. Jones join an esteemed group on the Council, which will serve as a leading authority in identifying, advising on and addressing ethical issues related to AI. In addition to Mr. Altman and Mr. Bryant, founding AI Ethics Council members include:..."

Friday, September 6, 2024

AN ETHICS EXPERT’S PERSPECTIVE ON AI AND HIGHER ED; Pace University, September 3, 2024

Johnni Medina, Pace University; AN ETHICS EXPERT’S PERSPECTIVE ON AI AND HIGHER ED

"As a scholar deeply immersed in both technology and philosophy, James Brusseau, PhD, has spent years unraveling the complex ethics of artificial intelligence (AI).

“As it happens, I was a physics major in college, so I've had an abiding interest in technology, but I finally decided to study philosophy,” Brusseau explains. “And I did not see much of an intersection between the scientific and my interest in philosophy until all of a sudden artificial intelligence landed in our midst with questions that are very philosophical.”

Some of these questions are heavy, with Brusseau positing an example, “If a machine acts just like a person, does it become a person?” But AI’s implications extend far beyond the theoretical, especially when it comes to the impact on education, learning, and career outcomes. What role does AI play in higher education? Is it a tool that enhances learning, or does it risk undermining it? And how do universities prepare students for an AI-driven world?

In a conversation that spans these topics, Brusseau shares his insights on the place of AI in higher education, its benefits, its risks, and what the future holds...

I think that if AI alone is the professor, then the knowledge students get will be imperfect in the same vaguely definable way that AI art is imperfect."

Friday, August 30, 2024

AI Ethics Part Two: AI Framework Best Practices; Mondaq, August 29, 2024

Laura Gibbs, Ben Verley, Justin Gould, Kristin Morrow, Rebecca Reeder, Mondaq; AI Ethics Part Two: AI Framework Best Practices

"Ethical artificial intelligence frameworks are still emerging across both public and private sectors, making the task of building a responsible AI program particularly challenging. Organizations often struggle to define the right requirements and implement effective measures. So, where do you start if you want to integrate AI ethics into your operations?

In Part I of our AI ethics series, we highlighted the growing pressure on organizations to adopt comprehensive ethics frameworks and the impact of failing to do so. We emphasized the key motivators for businesses to proactively address potential risks before they become reality.

This article delves into what an AI ethics framework is and why it is vital for mitigating these risks and fostering responsible AI use. We review AI ethics best practices, explore common challenges and pitfalls, and draw insights from the experiences of leading industry players across various sectors. We also discuss key considerations to ensure an effective and actionable AI ethics framework, providing a solid foundation for your journey towards ethical AI implementation.

AI Ethics Framework: Outline

A comprehensive AI ethics framework offers practitioners a structured guide with established rules and practices, enabling the identification of control points, performance boundaries, responses to deviations, and acceptable risk levels. Such a framework ensures timely ethical decision-making by asking the right questions. Below, we detail the main functions, core components, and key controls necessary for a robust AI ethics framework."