Showing posts with label AI Ethics Lab.

Tuesday, May 6, 2025

AI Ethics Lab Explores Impacts of the Technology’s Rapid Growth; Rutgers-Camden, March 17, 2025

Christina Lynn, Rutgers-Camden; AI Ethics Lab Explores Impacts of the Technology’s Rapid Growth

"A global research initiative has emerged at Rutgers–Camden to tackle the pressing ethical challenges and opportunities posed by the rapid growth of artificial intelligence, or AI. 

Launched last fall, the AI Ethics Lab, housed in the Digital Studies Center under the Department of English and Communication, examines artificial intelligence’s ethical and legal implications across the AI life cycle, from what kind of data is collected to the monitoring of this emerging technology. 

Leading the charge is Lecturer of Philosophy and Religion Nathan C. Walker, a First Amendment and human-rights expert with an international AI research pedigree and experience working with one of the world’s leading AI platforms. 

“Studying civil liberties and human rights uniquely positions me to identify where AI can go wrong,” Walker said. “If we go back to the basics—our core principles and our core values—we can actually remind humanity that eight decades of human-rights law have prepared us for this moment.”"

Thursday, August 1, 2024

What do corporations need to ethically implement AI? Turns out, a philosopher; Northeastern Global News, July 26, 2024

Northeastern Global News; What do corporations need to ethically implement AI? Turns out, a philosopher

"As the founder of the AI Ethics Lab, Canca maintains a team of “philosophers and computer scientists, and the goal is to help industry. That means corporations as well as startups, or organizations like law enforcement or hospitals, to develop and deploy AI systems responsibly and ethically,” she says.

Canca has also worked with organizations like the World Economic Forum and Interpol.

But what does “ethical” mean when it comes to AI? That, Canca says, is exactly the point.

“A lot of the companies come to us and say, ‘Here’s a model that we are planning to use. Is this fair?’” 

But, she notes, there are “different definitions of justice, distributive justice, different definitions of fairness. They conflict with each other. It is a big theoretical question. How do we define fairness?”

"Saying that ‘We optimized this for fairness,’ means absolutely nothing until you have a working,  proper definition” — which shifts from project to project, she also notes.

Now, Canca has been named one of Mozilla’s Rise25 honorees, which recognizes individuals “leading the next wave of AI — using philanthropy, collective power, and the principles of open source to make sure the future of AI is responsible, trustworthy, inclusive and centered around human dignity,” the organization wrote in its announcement."
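The point Canca makes about conflicting fairness definitions can be made concrete with a small, purely illustrative sketch. The toy data and the two metrics chosen here (demographic parity and equal opportunity) are my own assumptions for illustration, not taken from the article or from the AI Ethics Lab; the sketch simply shows that the same set of predictions can satisfy one fairness definition while violating another when groups have different base rates.

```python
# Illustrative only: hypothetical toy data showing that two common
# fairness definitions can disagree for the same classifier output.

# Each record: (group, true_label, predicted_label)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 1), ("B", 1, 0), ("B", 0, 0),
]

def positive_rate(group):
    """Share of group members who received a positive prediction."""
    preds = [p for g, _, p in records if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    """Share of truly positive group members predicted positive."""
    preds = [p for g, y, p in records if g == group and y == 1]
    return sum(preds) / len(preds)

# Demographic parity compares overall positive-prediction rates.
dp_gap = abs(positive_rate("A") - positive_rate("B"))

# Equal opportunity (one component of equalized odds) compares
# true-positive rates across groups.
eo_gap = abs(true_positive_rate("A") - true_positive_rate("B"))

print(f"Demographic parity gap: {dp_gap:.2f}")  # 0.00 -> parity satisfied
print(f"Equal opportunity gap:  {eo_gap:.2f}")  # 0.33 -> opportunity violated
```

Here the classifier gives both groups the same positive-prediction rate, so demographic parity holds exactly, yet group B's qualified members are approved less often than group A's, so equal opportunity fails. Which gap matters is precisely the kind of definitional question Canca describes, and it cannot be settled by optimization alone.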