Showing posts with label AI bias. Show all posts

Thursday, October 3, 2024

What You Need to Know About Grok AI and Your Privacy; Wired, September 10, 2024

Kate O'Flaherty, Wired; What You Need to Know About Grok AI and Your Privacy

"Described as “an AI search assistant with a twist of humor and a dash of rebellion,” Grok is designed to have fewer guardrails than its major competitors. Unsurprisingly, Grok is prone to hallucinations and bias, with the AI assistant blamed for spreading misinformation about the 2024 election."

Wednesday, October 2, 2024

Scottish university to host AI ethics conference; Holyrood, October 2, 2024

Holyrood; Scottish university to host AI ethics conference

"The University of Glasgow will gather leading figures from the artificial intelligence (AI) community for a three-day conference this week in a bid to address the ethical challenges posed by the technology.

Starting tomorrow, the Lovelace-Hodgkin Symposium will see academics, researchers, and policymakers discuss how to make AI a tool for “positive change” across higher education.

The event will inform the development of a new online course on AI ethics, which will boost ethical literacy “across higher education and beyond”, the university said...

During the symposium, speakers from the university’s research and student communities will present and participate in workshops alongside representatives to build the new course.

The first day of the event will examine the current state of AI, focusing on higher education and the use of AI in research and teaching.

On Thursday, the conference will discuss how to tackle inequality and bias in AI, featuring discussions on AI and race, gender, the environment, children’s rights, and how AI is communicated and consumed.

The final day will involve participants creating an ethical framework for inclusive AI, where they will outline a series of actionable steps and priorities for academic institutions, which will be used to underpin the online course."

Friday, February 16, 2018

Congress is worried about AI bias and diversity; Quartz, February 15, 2018

Dave Gershgorn, Quartz; Congress is worried about AI bias and diversity

"Recent research from the MIT Media Lab maintains that facial recognition is still significantly worse for people of color, however.
“This is not a small thing,” Isbell said of his experience. “It can be quite subtle, and you can go years and years and decades without even understanding you are injecting these kinds of biases, just in the questions that you’re asking, the data you’re given, and the problems you’re trying to solve.”
In his opening statement, Isbell talked about biased data in artificial intelligence systems today, including predictive policing and biased algorithms used in predicting recidivism rates.
“It does not take much imagination to see how being from a heavily policed area raises the chances of being arrested again, being convicted again, and in aggregate leads to even more policing of the same areas, creating a feedback loop,” he said. “One can imagine similar issues with determining it for a job, or credit-worthiness, or even face recognition and automated driving.”"