Showing posts with label AI bias.

Tuesday, June 3, 2025

5 ethical questions about artificial intelligence; Britannica Money, May 2025

Britannica Money; 5 ethical questions about artificial intelligence

"Are you wondering about the ethical implications of artificial intelligence? You’re not alone. AI is an innovative, powerful tool that many fear could produce significant consequences—some positive, some negative, and some downright dangerous.

Ethical concerns about an emerging technology aren’t new, but with the rise of generative AI and rapidly increasing user adoption, the conversation is taking on new urgency. Is AI fair? Does it protect our privacy? Who is accountable when AI makes a mistake—and is AI the ultimate job killer? Enterprises, individuals, and regulators are grappling with these important questions.


Let’s explore the major ethical concerns surrounding artificial intelligence and how AI designers can potentially address these problems."

Emerging Issues in the Use of Generative AI: Ethics, Sanctions, and Beyond; The Federalist Society, June 3, 2025 12 PM EDT

The Federalist Society; Emerging Issues in the Use of Generative AI: Ethics, Sanctions, and Beyond

"The idea of Artificial Intelligence has long presented potential challenges in the legal realm, and as AI tools become more broadly available and widely used, those potential hurdles are becoming ever more salient for lawyers in their day-to-day operations. Questions abound, from what potential risks of bias and error may exist in using an AI tool, to the challenges related to professional responsibility as traditionally understood, to the risks large language models pose to client confidentiality. Some contend that AI is a must-use, as it opens the door to faster, more efficient legal research that could equip lawyers to serve their clients more effectively. Others reject the use of AI, arguing that the risks of use and the work required to check the output it gives exceed its potential benefit.

Join us for a FedSoc Forum exploring the ethical and legal implications of artificial intelligence in the practice of law.

Featuring: 

  • Laurin H. Mills, Member, Werther & Mills, LLC
  • Philip A. Sechler, Senior Counsel, Alliance Defending Freedom
  • Prof. Eugene Volokh, Gary T. Schwartz Distinguished Professor of Law Emeritus, UCLA School of Law; Thomas M. Siebel Senior Fellow, Hoover Institution, Stanford University
  • (Moderator) Hon. Brantley Starr, District Judge, United States District Court for the Northern District of Texas"

Thursday, May 15, 2025

Republicans propose prohibiting US states from regulating AI for 10 years; The Guardian, May 14, 2025

The Guardian; Republicans propose prohibiting US states from regulating AI for 10 years

"Republicans in US Congress are trying to bar states from being able to introduce or enforce laws that would create guardrails for artificial intelligence or automated decision-making systems for 10 years.

A provision in the proposed budgetary bill now before the House of Representatives would prohibit any state or local governing body from pursuing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” unless the purpose of the law is to “remove legal impediments to, or facilitate the deployment or operation of” these systems...

The bill defines AI systems and models broadly, with anything from facial recognition systems to generative AI qualifying. The proposed law would also apply to systems that use algorithms or AI to make decisions including for hiring, housing and whether someone qualifies for public benefits.

Many of these automated decision-making systems have recently come under fire. The deregulatory proposal comes on the heels of a lawsuit filed by several state attorneys general against the property management software RealPage, which the lawsuit alleges colluded with landlords to raise rents based on the company’s algorithmic recommendations. Another company, SafeRent, recently settled a class-action lawsuit filed by Black and Hispanic renters who say they were denied apartments based on an opaque score the company gave them."

Thursday, October 3, 2024

What You Need to Know About Grok AI and Your Privacy; Wired, September 10, 2024

Kate O'Flaherty, Wired; What You Need to Know About Grok AI and Your Privacy

"Described as “an AI search assistant with a twist of humor and a dash of rebellion,” Grok is designed to have fewer guardrails than its major competitors. Unsurprisingly, Grok is prone to hallucinations and bias, with the AI assistant blamed for spreading misinformation about the 2024 election."

Wednesday, October 2, 2024

Scottish university to host AI ethics conference; Holyrood, October 2, 2024

Holyrood; Scottish university to host AI ethics conference

"The University of Glasgow will gather leading figures from the artificial intelligence (AI) community for a three-day conference this week in a bid to address the ethical challenges posed by the technology.

Starting tomorrow, the Lovelace-Hodgkin Symposium will see academics, researchers, and policymakers discuss how to make AI a tool for “positive change” across higher education.

The event will inform the development of a new online course on AI ethics, which will boost ethical literacy “across higher education and beyond”, the university said...

During the symposium, speakers from the university’s research and student communities will present and participate in workshops alongside representatives to build the new course.

The first day of the event will examine the current state of AI, focusing on higher education and the use of AI in research and teaching.

On Thursday, the conference will discuss how to tackle inequality and bias in AI, featuring discussions on AI and race, gender, the environment, children’s rights, and how AI is communicated and consumed.

The final day will involve participants creating an ethical framework for inclusive AI, where they will outline a series of actionable steps and priorities for academic institutions, which will be used to underpin the online course."

Friday, February 16, 2018

Congress is worried about AI bias and diversity; Quartz, February 15, 2018

Dave Gershgorn, Quartz; Congress is worried about AI bias and diversity

"Recent research from the MIT Media Lab maintains that facial recognition is still significantly worse for people of color, however.
“This is not a small thing,” Isbell said of his experience. “It can be quite subtle, and you can go years and years and decades without even understanding you are injecting these kinds of biases, just in the questions that you’re asking, the data you’re given, and the problems you’re trying to solve.”
In his opening statement, Isbell talked about biased data in artificial intelligence systems today, including predictive policing and biased algorithms used in predicting recidivism rates.
“It does not take much imagination to see how being from a heavily policed area raises the chances of being arrested again, being convicted again, and in aggregate leads to even more policing of the same areas, creating a feedback loop,” he said. “One can imagine similar issues with determining it for a job, or credit-worthiness, or even face recognition and automated driving.”"