
Wednesday, September 4, 2024

NEH Awards $2.72 Million to Create Research Centers Examining the Cultural Implications of Artificial Intelligence; National Endowment for the Humanities (NEH), August 27, 2024

Press Release, National Endowment for the Humanities (NEH); NEH Awards $2.72 Million to Create Research Centers Examining the Cultural Implications of Artificial Intelligence

"The National Endowment for the Humanities (NEH) today announced grant awards totaling $2.72 million for five colleges and universities to create new humanities-led research centers that will serve as hubs for interdisciplinary collaborative research on the human and social impact of artificial intelligence (AI) technologies.

As part of NEH’s third and final round of grant awards for FY2024, the Endowment made its inaugural awards under the new Humanities Research Centers on Artificial Intelligence program, which aims to foster a more holistic understanding of AI in the modern world by creating scholarship and learning centers across the country that spearhead research exploring the societal, ethical, and legal implications of AI. 

Institutions in California, New York, North Carolina, Oklahoma, and Virginia were awarded NEH grants to establish the first AI research centers and pilot two or more collaborative research projects that examine AI through a multidisciplinary humanities lens. 

The new Humanities Research Centers on Artificial Intelligence grant program is part of NEH’s agencywide Humanities Perspectives on Artificial Intelligence initiative, which supports humanities projects that explore the impacts of AI-related technologies on truth, trust, and democracy; safety and security; and privacy, civil rights, and civil liberties. The initiative responds to President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which establishes new standards for AI safety and security, protects Americans’ privacy, and advances equity and civil rights."

Sunday, March 31, 2024

Philosophy, ethics, and the pursuit of 'responsible' artificial intelligence; Rochester Institute of Technology (RIT), March 7, 2024

 Felicia Swartzenberg, Rochester Institute of Technology (RIT); Philosophy, ethics, and the pursuit of 'responsible' artificial intelligence

"Evan Selinger, professor in RIT’s Department of Philosophy, has taken an interest in the ethics of AI and the policy gaps that need to be filled in. Through a humanities lens, Selinger asks the questions, "How can AI cause harm, and what can governments and companies creating AI programs do to address and manage it?" Answering them, he explained, requires an interdisciplinary approach...

“AI ethics has core values and principles, but there’s endless disagreement about interpreting and applying them and creating meaningful accountability mechanisms,” said Selinger. “Some people are rightly worried that AI can be co-opted into ‘ethics washing’—weak checklists, flowery mission statements, and empty rhetoric that covers over abuses of power. Fortunately, I’ve had great conversations about this issue, including with folks at Microsoft, on why it is important to consider a range of positions.”

There are many issues that need to be addressed as companies pursue responsible AI, including public concern over whether generative AI is stealing from artists. Some of Selinger’s recent research has focused on the back-end issues with developing AI, such as the human toll that comes with testing AI chatbots before they’re released to the public. Other issues focus on policy, such as what to do about the dangers posed by facial recognition and other automated approaches to surveillance.

In a chapter for a book that will be published by MIT Press, Selinger, along with co-authors Brenda Leong, partner at Luminos.Law, and Albert Fox Cahn, founder and executive director of Surveillance Technology Oversight Project, offer concrete suggestions for conducting responsible AI audits, while also considering civil liberties objections."