Showing posts with label responsible AI.

Saturday, November 29, 2025

Fordham Offers Certificate Focused on AI Ethics; Fordham Now, November 17, 2025

Fordham Now; Fordham Offers Certificate Focused on AI Ethics

"As new technologies like artificial intelligence become increasingly embedded in everyday life, questions about how to use them responsibly have grown more urgent. A new advanced certificate program at Fordham aims to help professionals engage with those questions and build expertise as ethical decision-makers in an evolving technological landscape. 

The Advanced Certificate in Ethics and Emerging Technologies is scheduled to launch in August 2026, with applications due April 1. The 12-credit program provides students with a foundation for understanding not only how technologies such as AI work, but also how to evaluate their social and moral implications to make informed decisions about their use. 

A Long History of Ethical Education

The program’s development was guided by faculty in Fordham’s Center for Ethics Education, which has been a part of the University community for roughly three decades. According to Megan Bogia, associate director for academic programs and strategic initiatives at the center, the certificate program was developed in response to a growing need for ethical literacy among professionals working with new technologies—whether that means weighing questions of bias in AI-driven hiring tools, navigating privacy concerns in health data, or understanding the societal effects of automation. 

“As technologies rapidly advance and permeate more deeply into our daily lives, it’s important that we simultaneously build up the fluency to interrogate them,” said Bogia. “Not just so that we can advance a more just society, but also so we can be internally confident in navigating an increasingly complicated world.”

Flexible Options for a Variety of Fields

Students will complete courses that examine ethical issues related to technology, as well as classes that provide technical grounding in the systems behind it. One required course, currently under development by the Department of Computer and Information Science, will cover artificial intelligence for non-specialists, Bogia said, helping students understand “all of the machinations of LLMs—large language models—so they can be fully informed interlocutors with the models.”

Other courses will explore questions of moral responsibility and social impact. Electives such as “Algorithmic Bias” and “Technology and Human Development” will allow students to dig more deeply into specialized areas. 

Bogia said the program—which can be completed full-time or part-time, over the course of one or two years—was designed to be flexible and relevant for students across a wide range of fields and career stages. It may appeal to professionals working in areas such as business, education, human resources, health care, and law, as well as those in technology-focused fields like data science and cybersecurity. 

“These ethical questions are everywhere,” Bogia said. “We’ll have learning environments that meet students where they’re at and allow them to develop fluency in a way that’s most useful for them.”

She added that Fordham is an especially fitting place to pursue this kind of inquiry.

“As a Jesuit institution, Fordham is well-positioned to be concerned and compassionate in the face of hard problems,” said Bogia. 

To learn more, visit the program’s webpage."

Friday, November 7, 2025

The ethics of AI, from policing to healthcare; KPBS; November 3, 2025

Jade Hindmon, KPBS Midday Edition Host, and Ashley Rusch, Producer, KPBS; The ethics of AI, from policing to healthcare

"Artificial intelligence is everywhere — from our office buildings, to schools and government agencies.

The Chula Vista Police Department is joining other cities in using AI to write police reports. Several San Diego County police departments also use AI-powered drones to support their work.

Civil liberties advocates are concerned about privacy, safety and surveillance. 

On Midday Edition, we sit down with an expert in AI ethics to discuss the philosophical questions of responsible AI.

Guest:

  • David Danks, professor of data science, philosophy and policy at UC San Diego"

Tuesday, August 27, 2024

Ethical and Responsible AI: A Governance Framework for Boards; Directors & Boards, August 27, 2024

Sonita Lontoh, Directors & Boards; Ethical and Responsible AI: A Governance Framework for Boards 

"Boards must understand what gen AI is being used for and its potential business value supercharging both efficiencies and growth. They must also recognize the risks that gen AI may present. As we have already seen, these risks may include data inaccuracy, bias, privacy issues and security. To address some of these risks, boards and companies should ensure that their organizations' data and security protocols are AI-ready. Several criteria must be met:

  • Data must be ethically governed. Companies' data must align with their organization's guiding principles. The different groups inside the organization must also be aligned on the outcome objectives, responsibilities, risks and opportunities around the company's data and analytics.
  • Data must be secure. Companies must protect their data to ensure that intruders don't get access to it and that their data doesn't go into someone else's training model.
  • Data must be free of bias to the greatest extent possible. Companies should gather data from diverse sources, not from a narrow set of people of the same age, gender, race or backgrounds. Additionally, companies must ensure that their algorithms do not inadvertently perpetuate bias.
  • AI-ready data must mirror real-world conditions. For example, robots in a warehouse need more than data; they also need to be taught the laws of physics so they can move around safely.
  • AI-ready data must be accurate. In some cases, companies may need people to double-check data for inaccuracy.

It's important to understand that all these attributes build on one another. The more ethically governed, secure, free of bias and enriched a company's data is, the more accurate its AI outcomes will be."
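
The checklist lends itself to partial automation. Below is a minimal sketch of a pre-training data audit in the spirit of these criteria, written in Python with pandas; the column names, the demographic field, and the 10% representation threshold are illustrative assumptions, not part of the article's framework.

import pandas as pd

# Hypothetical pre-training audit inspired by the AI-ready data criteria
# above; column names and thresholds are illustrative assumptions.
def audit_ai_readiness(df, demographic_col="gender", min_group_share=0.10):
    report = {}

    # Accuracy: flag missing values and duplicate records for human review.
    report["missing_fraction"] = float(df.isna().mean().mean())
    report["duplicate_rows"] = int(df.duplicated().sum())

    # Bias: check whether any demographic group dominates the sample or
    # falls below the minimum representation threshold.
    shares = df[demographic_col].value_counts(normalize=True)
    report["max_group_share"] = float(shares.max())
    report["underrepresented_groups"] = list(shares[shares < min_group_share].index)

    return report

sample = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "F", "M"],
    "age": [34, 29, 41, None, 52, 38],
    "label": [1, 0, 1, 1, 0, 0],
})
print(audit_ai_readiness(sample))

Checks like these only cover the accuracy and bias items; ethical governance, security, and fidelity to real-world conditions are organizational questions that sit outside the code.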

Wednesday, May 1, 2024

Microsoft’s “responsible AI” chief worries about the open web; The Washington Post, May 1, 2024

The Washington Post; Microsoft’s “responsible AI” chief worries about the open web

"As tech giants move toward a world in which chatbots supplement, and perhaps supplant, search engines, the Microsoft executive assigned to make sure AI is used responsibly said the industry has to be careful not to break the business model of the wider web. Search engines citing and linking to the websites they draw from is “part of the core bargain of search,” Natasha Crampton said in an interview Monday.

Crampton, Microsoft’s chief Responsible AI officer, spoke with The Technology 202 ahead of Microsoft’s release today of its first “Responsible AI Transparency Report.” The 39-page report, which the company is billing as the first of its kind from a major tech firm, details how Microsoft plans to keep its rapidly expanding stable of AI tools from wreaking havoc. 

It makes the case that the company has closely integrated Crampton’s Responsible AI team into its development of new AI products. It also details the progress the company has made toward meeting some of the Voluntary AI Commitments that Microsoft and other tech giants signed on to in September as part of the Biden administration’s push to regulate artificial intelligence. Those include developing safety evaluation systems for its AI cloud tools, expanding its internal AI “red teams,” and allowing users to mark images as AI-generated."

Sunday, March 31, 2024

Philosophy, ethics, and the pursuit of 'responsible' artificial intelligence; Rochester Institute of Technology (RIT), March 7, 2024

 Felicia Swartzenberg, Rochester Institute of Technology (RIT); Philosophy, ethics, and the pursuit of 'responsible' artificial intelligence

"Evan Selinger, professor in RIT’s Department of Philosophy, has taken an interest in the ethics of AI and the policy gaps that need to be filled in. Through a humanities lens, Selinger asks the questions, "How can AI cause harm, and what can governments and companies creating AI programs do to address and manage it?" Answering them, he explained, requires an interdisciplinary approach...

“AI ethics has core values and principles, but there’s endless disagreement about interpreting and applying them and creating meaningful accountability mechanisms,” said Selinger. “Some people are rightly worried that AI can be co-opted into ‘ethics washing’—weak checklists, flowery mission statements, and empty rhetoric that covers over abuses of power. Fortunately, I’ve had great conversations about this issue, including with folks at Microsoft, on why it is important to consider a range of positions.”

There are many issues that need to be addressed as companies pursue responsible AI, including public concern over whether generative AI is stealing from artists. Some of Selinger’s recent research has focused on the back-end issues with developing AI, such as the human toll that comes with testing AI chatbots before they’re released to the public. Other issues focus on policy, such as what to do about the dangers posed by facial recognition and other automated approaches to surveillance.

In a chapter for a book that will be published by MIT Press, Selinger, along with co-authors Brenda Leong, partner at Luminos.Law, and Albert Fox Cahn, founder and executive director of Surveillance Technology Oversight Project, offers concrete suggestions for conducting responsible AI audits, while also considering civil liberties objections."

Saturday, February 25, 2023

History May Wonder Why Microsoft Let Its Principles Go for a Creepy, Clingy Bot; The New York Times, February 23, 2023

The New York Times; History May Wonder Why Microsoft Let Its Principles Go for a Creepy, Clingy Bot

"Microsoft’s “responsible A.I.” program started in 2017 with six principles by which it pledged to conduct business. Suddenly, it is on the precipice of violating all but one of those principles. (Though the company says it is still adhering to all six of them.)"

Thursday, October 7, 2021

AI-ethics pioneer Margaret Mitchell on her five-year plan at open-source AI startup Hugging Face; Emerging Tech Brew, October 4, 2021

Hayden Field, Emerging Tech Brew; AI-ethics pioneer Margaret Mitchell on her five-year plan at open-source AI startup Hugging Face

"Hugging Face wants to bring these powerful tools to more people. Its mission: Help companies build, train, and deploy AI models—specifically natural language processing (NLP) systems—via its open-source tools, like Transformers and Datasets. It also offers pretrained models available for download and customization.

So what does it mean to play a part in “democratizing” these powerful NLP tools? We chatted with Mitchell about the split from Google, her plans for her new role, and her near-future predictions for responsible AI."
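
For context on the tools named above: the "pretrained models available for download and customization" in the excerpt can be as simple as a few lines with the Transformers library's pipeline API. A minimal sketch; the input sentence is ours, and the default model is whatever checkpoint the library currently ships for the task.

from transformers import pipeline

# Downloads a default pretrained sentiment-analysis model from the
# Hugging Face Hub on first use, then runs inference locally.
classifier = pipeline("sentiment-analysis")
print(classifier("Open-source tooling is making NLP far more accessible."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]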