Showing posts with label bias.

Monday, October 28, 2024

Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself; New Jersey Institute of Technology, October 22, 2024

Evan Koblentz, New Jersey Institute of Technology; Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself

"Expert panelists took a measured tone about the trends, challenges and ethics of artificial intelligence, at a campus forum organized by NJIT’s Institute for Data Science this month.

The panel moderator was institute director David Bader, who is also a distinguished professor in NJIT Ying Wu College of Computing and who shared his own thoughts on AI in a separate Q&A recently. The panel members were Kevin Coulter, field CTO for AI, Dell Technologies; Grace Wang, distinguished professor and director of NJIT’s Center for Artificial Intelligence Research; and Mengjia Xu, assistant professor of data science. DataBank Ltd., a data center firm that hosts NJIT’s Wulver high-performance computing cluster, was the event sponsor...

Bader: “There's also a lot of concerns that get raised with AI in terms of privacy, in terms of ethics, in terms of its usage. So I really want to understand your thoughts on how we ensure that AI systems are developed and deployed ethically. And are there specific frameworks or guidelines that you would follow?”...

Wang: “Well, I always believe that AI at its core is just a tool, so there's no difference between AI and, say, lock-picking tools. Lock-picking tools can open your door if you lock yourself out, and they can also open other people's doors. That's a crime, right? So it depends on how AI is used. From that perspective, there's not much special when we talk about AI ethics versus, say, computer security ethics, or the ethics of how to use a gun, for example. But what is different is that AI is so complex that how it works is beyond the knowledge of many of us. Sometimes it looks ethical, but maybe what's behind it is amplifying bias by using AI tools without our knowledge. So whenever we talk about AI ethics, I think the most important thing is education, so you know what AI is about, how it works, what AI can do and what it cannot. For now we have the fear that AI is so powerful it can do anything, but actually, many of the things that people believe AI can do now could be done in the past by just about any software system. So education is very, very important to help us demystify AI; then we can talk about AI ethics. I want to emphasize transparency. If AI is used for decision making, understanding how the decision is made becomes very, very important. And another important topic related to AI ethics is auditing: if we don't know what's inside, at least we have some assessment tools to know whether there's a risk in certain circumstances, whether it can generate a harmful result or not, very much like the stress testing of the financial system after 2008.”

Tuesday, August 27, 2024

Ethical and Responsible AI: A Governance Framework for Boards; Directors & Boards, August 27, 2024

Sonita Lontoh, Directors & Boards; Ethical and Responsible AI: A Governance Framework for Boards 

"Boards must understand what gen AI is being used for and its potential business value supercharging both efficiencies and growth. They must also recognize the risks that gen AI may present. As we have already seen, these risks may include data inaccuracy, bias, privacy issues and security. To address some of these risks, boards and companies should ensure that their organizations' data and security protocols are AI-ready. Several criteria must be met:

  • Data must be ethically governed. Companies' data must align with their organization's guiding principles. The different groups inside the organization must also be aligned on the outcome objectives, responsibilities, risks and opportunities around the company's data and analytics.
  • Data must be secure. Companies must protect their data to ensure that intruders don't get access to it and that their data doesn't go into someone else's training model.
  • Data must be free of bias to the greatest extent possible. Companies should gather data from diverse sources, not from a narrow set of people of the same age, gender, race or backgrounds. Additionally, companies must ensure that their algorithms do not inadvertently perpetuate bias.
  • AI-ready data must mirror real-world conditions. For example, robots in a warehouse need more than data; they also need to be taught the laws of physics so they can move around safely.
  • AI-ready data must be accurate. In some cases, companies may need people to double-check data for inaccuracy.

It's important to understand that all these attributes build on one another. The more ethically governed, secure, free of bias and enriched a company's data is, the more accurate its AI outcomes will be."
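Of the criteria above, "free of bias to the greatest extent possible" is the one a board can most readily ask to see evidence for. As a purely illustrative sketch (the field name, threshold, and data below are assumptions of this example, not from the article), a basic pre-training audit can flag demographic groups that are badly under-represented in a dataset before any model is built:

    # Minimal, hypothetical sketch: flag groups whose share of the data falls
    # below a chosen threshold. Field name and threshold are illustrative only.
    from collections import Counter

    def underrepresented_groups(records, attribute, min_share=0.10):
        # Return {group: share} for groups below min_share of all records.
        counts = Counter(r[attribute] for r in records)
        total = sum(counts.values())
        return {g: n / total for g, n in counts.items() if n / total < min_share}

    # Toy records with an assumed 'age_band' field.
    data = ([{"age_band": "18-29"}] * 60
            + [{"age_band": "30-49"}] * 35
            + [{"age_band": "50+"}] * 5)
    print(underrepresented_groups(data, "age_band"))   # {'50+': 0.05}

A flag like this does not make data fair by itself, but it gives directors a concrete artifact to request alongside the governance criteria above.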

Tuesday, July 16, 2024

Ghosts in the Machine: How Past and Present Biases Haunt Algorithmic Tenant Screening Systems; American Bar Association (ABA), June 3, 2024

Gary Rhoades, American Bar Association (ABA); Ghosts in the Machine: How Past and Present Biases Haunt Algorithmic Tenant Screening Systems

"The Civil Rights Act of 1968, also known as the Fair Housing Act (FHA), banned housing discrimination nationwide on the basis of race, religion, national origin, and color. One key finding that persuaded Dr. Martin Luther King Jr., President Lyndon Johnson, and others to fight for years for the passage of this landmark law confirmed that many Americans were being denied rental housing because of their race. Black families were especially impacted by the discriminatory rejections. They were forced to move on and spend more time and money to find housing and often had to settle for substandard housing in unsafe neighborhoods and poor school districts to avoid homelessness.

April 2024 marked the 56th year of the FHA’s attempt to end such unfair treatment. Despite the law’s broadly stated protections, its numerous state and local counterparts, and decades of enforcement, landlords’ use of high-tech algorithms for tenant screening threatens to erase the progress made. While employing algorithms to mine data such as criminal records, credit reports, and civil court records to make predictions about prospective tenants might partially remove the fallible human element, old and new biases, especially regarding race and source of income, still plague the screening results."

Tuesday, July 2, 2024

Are AI-powered church services coming to a pew near you?; Scripps News, May 10, 2024

 

""Depending upon what data sets it's using, we get an intense amount of bias within AI right now," Callaway told Scripps News. "And it reflects, shock and awe, the same bias that we have as humans. And so having someone that is actually a kind of wise guide or mentor to help you discern how to even interpret, understand the results that AI is giving you is really important."

But Callaway says there's good that can come from AI, like translating the Bible into various languages...

Rabbi Geoff Mitelman, who helped found the studies at Temple B'Nai Or through his organization Sinai and Synapses, agrees, saying AI can be an aid in study...

However, there are concerns across religions about the interpretation of such texts, bias and misinformation.

"The spread of misinformation and how easy it is to create and then spread misinformation, whether that's using something like Dall-E or ChatGPT or videos and also algorithms that will spread misinformation — because at least for hundreds of thousands of years it was better for humans to trust than to not trust, right?" said Mitelman.

That cautious view of AI and religion seems to translate across practices, a poll from the Christian research group Barna shows.
Over half of Christians, 52%, said they'd be disappointed if they found out AI was used in their church."

Wednesday, April 10, 2024

'Magical Overthinking' author says information overload can stoke irrational thoughts; NPR, Fresh Air, April 9, 2024

NPR, Fresh Air; 'Magical Overthinking' author says information overload can stoke irrational thoughts

"How is it that we are living in the information age — and yet life seems to make less sense than ever? That's the question author and podcast host Amanda Montell set out to answer in her new book, The Age of Magical Overthinking. 

Montell says that our brains are overloaded with a constant stream of information that stokes our innate tendency to believe conspiracy theories and mysticism...

Montell, who co-hosts the podcast Sounds Like A Cult, says this cognitive bias is what allows misinformation and disinformation to spread so easily, particularly online. It also helps explain our tendency to make assumptions about celebrities we admire...

Montell says that in an age of overwhelming access to information, it's important to step away from electronic devices. "We are meant for a physical world. That's what our brains are wired for," she says. "These devices are addictive, but I find that my nervous system really thanks me when I'm able to do that.""

Tuesday, January 2, 2024

How the Federal Government Can Rein In A.I. in Law Enforcement; The New York Times, January 2, 2024

 Joy Buolamwini and , The New York Times; How the Federal Government Can Rein In A.I. in Law Enforcement

"One of the most hopeful proposals involving police surveillance emerged recently from a surprising quarter — the federal Office of Management and Budget. The office, which oversees the execution of the president’s policies, has recommended sorely needed constraints on the use of artificial intelligence by federal agencies, including law enforcement.

The office’s work is commendable, but shortcomings in its proposed guidance to agencies could still leave people vulnerable to harm. Foremost among them is a provision that would allow senior officials to seek waivers by arguing that the constraints would hinder law enforcement. Those law enforcement agencies should instead be required to provide verifiable evidence that A.I. tools they or their vendors use will not cause harm, worsen discrimination or violate people’s rights."

Friday, July 7, 2023

The Supreme Court makes almost all of its decisions on the 'shadow docket.' An author argues it should worry Americans more than luxury trips.; Insider, July 7, 2023

Insider; The Supreme Court makes almost all of its decisions on the 'shadow docket.' An author argues it should worry Americans more than luxury trips.

"The decisions made on the shadow docket are not inherently biased, Vladeck said, but the lack of transparency stokes legitimate concerns about the court's politicization and polarization, especially as the public's trust in the institution reaches an all-time low.

"Even judges and justices acting in good faith can leave the impression that their decisions are motivated by bias or bad faith — which is why judicial ethics standards, even those few that apply to the Supreme Court itself, worry about both bias and the appearance thereof," Vladeck writes.

The dangers posed by the shadow docket are more perilous than the wrongs of individual justices, Vladeck argues, because the shadow docket's ills are inherently institutional." 

Monday, June 19, 2023

Ethical, legal issues raised by ChatGPT training literature; Tech Xplore, May 8, 2023

Peter Grad, Tech Xplore; Ethical, legal issues raised by ChatGPT training literature

""Knowing what books a model has been trained on is critical to assess such sources of bias," they said.

"Our work here has shown that OpenAI models know about books in proportion to their popularity on the web."

Works detected in the Berkeley study include "Harry Potter," "1984," "Lord of the Rings," "Hunger Games," "Hitchhiker's Guide to the Galaxy," "Fahrenheit 451," "A Game of Thrones" and "Dune."

While ChatGPT was found to be quite knowledgeable about works in the , lesser known works such as Global Anglophone Literature—readings aimed beyond core English-speaking nations that include Africa, Asia and the Caribbean—were largely unknown. Also overlooked were works from the Black Book Interactive Project and Black Caucus Library Association award winners.

"We should be thinking about whose narrative experiences are encoded in these models, and how that influences other behaviors," Bamman, one of the Berkeley researchers, said in a recent Tweet. He added, "popular texts are probably not good barometers of model performance [given] the bias toward sci-fi/fantasy.""

Friday, May 27, 2022

Accused of Cheating by an Algorithm, and a Professor She Had Never Met; The New York Times, May 27, 2022

Kashmir Hill, The New York Times; Accused of Cheating by an Algorithm, and a Professor She Had Never Met

An unsettling glimpse at the digitization of education.

"The most serious flaw with these systems may be a human one: educators who overreact when artificially intelligent software raises an alert.

“Schools seem to be treating it as the word of God,” Mr. Quintin said. “If the computer says you’re cheating, you must be cheating.”"

Tuesday, February 15, 2022

What internet outrage reveals about race and TikTok's algorithm; NPR, February 14, 2022

Jess Kung, NPR; What internet outrage reveals about race and TikTok's algorithm

"The more our lives become intertwined and caught up in tech and social media algorithms, the more it's worth trying to understand and unpack just how those algorithms work. Who becomes viral, and why? Who gets harassed, who gets defended, and what are the lasting repercussions? And how does the internet both obscure and exacerbate the racial and gender dynamics that already animate so much of our social interactions?"

Sunday, January 23, 2022

The Humanities Can't Save Big Tech From Itself; Wired, January 12, 2022

Wired; The Humanities Can't Save Big Tech From Itself

 "I’ve been studying nontechnical workers in the tech and media industries for the past several years. Arguments to “bring in” sociocultural experts elide the truth that these roles and workers already exist in the tech industry and, in varied ways, always have. For example, many current UX researchers have advanced degrees in sociology, anthropology, and library and information sciences. And teachers and EDI (Equity, Diversity, and Inclusion) experts often occupy roles in tech HR departments.

Recently, however, the tech industry is exploring where nontechnical expertise might counter some of the social problems associated with their products. Increasingly, tech companies look to law and philosophy professors to help them through the legal and moral intricacies of platform governance, to activists and critical scholars to help protect marginalized users, and to other specialists to assist with platform challenges like algorithmic oppression, disinformation, community management, user wellness, and digital activism and revolutions. These data-driven industries are trying hard to augment their technical know-how and troves of data with social, cultural, and ethical expertise, or what I often refer to as “soft” data.

But you can add all of the soft data workers you want and little will change unless the industry values that kind of data and expertise. In fact, many academics, policy wonks, and other sociocultural experts in the AI and tech ethics space are noticing a disturbing trend of tech companies seeking their expertise and then disregarding it in favor of more technical work and workers...

Finally, though the librarian profession is often cited as one that might save Big Tech from its disinformation dilemmas, some in LIS (Library and Information Science) argue they collectively have a long way to go before they’re up to the task. Safiya Noble noted the profession’s (just over 83% white) “colorblind” ideology and sometimes troubling commitment to neutrality. This commitment, the book Knowledge Justice explains, leads to many librarians believing, “Since we serve everyone, we must allow materials, ideas, and values from everyone.” In other words, librarians often defend allowing racist, transphobic, and other harmful information to stand alongside other materials by saying they must entertain “all sides” and allow people to find their way to the “best” information. This is the exact same error platforms often make in allowing disinformation and abhorrent content to flourish online."

Friday, May 7, 2021

In Covid Vaccine Data, L.G.B.T.Q. People Fear Invisibility; The New York Times, May 7, 2021

The New York Times; In Covid Vaccine Data, L.G.B.T.Q. People Fear Invisibility

"Agencies rely on population data to make policy decisions and direct funding, and advocates say that failing to collect sexual orientation and gender identity data on Covid-19 vaccine uptake could obscure the real picture and prevent vaccine distribution decisions and funds from positively impacting this population.

When it comes to Covid-19 vaccine distribution, “how can you design interventions and know where to target your resources if you don’t know where you’ve been?” said Dr. Ojikutu.

A February study showed that L.G.B.T.Q. people with high medical mistrust and concern about experiencing stigma or discrimination were least likely to say they would accept a Covid-19 vaccine.

“The reason we need to do data-driven, culturally responsive outreach is that medical mistrust — and along with that, vaccine hesitancy — among L.G.B.T.Q. people is rooted in the stigma and discrimination that this community has experienced over time,” said Alex Keuroghlian, a psychiatrist and director of the National LGBTQIA+ Health Education Center and the Massachusetts General Hospital Psychiatry Gender Identity Program."

Saturday, July 11, 2020

Wrongfully Accused by an Algorithm; The New York Times, June 24, 2020

The New York Times; Wrongfully Accused by an Algorithm

In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.

"Clare Garvie, a lawyer at Georgetown University’s Center on Privacy and Technology, has written about problems with the government’s use of facial recognition. She argues that low-quality search images — such as a still image from a grainy surveillance video — should be banned, and that the systems currently in use should be tested rigorously for accuracy and bias.

“There are mediocre algorithms and there are good ones, and law enforcement should only buy the good ones,” Ms. Garvie said.

About Mr. Williams’s experience in Michigan, she added: “I strongly suspect this is not the first case to misidentify someone to arrest them for a crime they didn’t commit. This is just the first time we know about it.”"

Thursday, January 23, 2020

Five Ways Companies Can Adopt Ethical AI; Forbes, January 23, 2020

Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning, World Economic Forum, Forbes; Five Ways Companies Can Adopt Ethical AI

"In 2014, Stephen Hawking said that AI would be humankind’s best or last invention. Six years later, as we welcome 2020, companies are looking at how to use Artificial Intelligence (AI) in their business to stay competitive. The question they are facing is how to evaluate whether the AI products they use will do more harm than good...

Here are five lessons for the ethical use of AI."

Tuesday, November 26, 2019

NYC wants a chief algorithm officer to counter bias, build transparency; Ars Technica, November 25, 2019

Kate Cox, Ars Technica; NYC wants a chief algorithm officer to counter bias, build transparency

"It takes a lot of automation to make the nation's largest city run, but it's easy for that kind of automation to perpetuate existing problems and fall unevenly on the residents it's supposed to serve. So to mitigate the harms and ideally increase the benefits, New York City has created a high-level city government position essentially to manage algorithms."

Wednesday, November 6, 2019

Rights group files federal complaint against AI-hiring firm HireVue, citing ‘unfair and deceptive’ practices; The Washington Post, November 6, 2019

Drew Harwell, The Washington Post; Rights group files federal complaint against AI-hiring firm HireVue, citing ‘unfair and deceptive’ practices

"The Electronic Privacy Information Center, known as EPIC, on Wednesday filed an official complaint calling on the FTC to investigate HireVue’s business practices, saying the company’s use of unproven artificial intelligence systems that scan people’s faces and voices constituted a wide-scale threat to American workers."

How Machine Learning Pushes Us to Define Fairness; Harvard Business Review, November 6, 2019

David Weinberger, Harvard Business Review; How Machine Learning Pushes Us to Define Fairness

"Even with the greatest of care, an ML system might find biased patterns so subtle and complex that they hide from the best-intentioned human attention. Hence the necessary current focus among computer scientists, policy makers, and anyone concerned with social justice on how to keep bias out of AI. 

Yet machine learning’s very nature may also be bringing us to think about fairness in new and productive ways. Our encounters with machine learning (ML) are beginning to give us concepts, a vocabulary, and tools that enable us to address questions of bias and fairness more directly and precisely than before."
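Weinberger's point that machine learning pushes us to address fairness "more directly and precisely" is easiest to see in code: a fairness criterion has to be written down as an explicit formula before it can be measured. The sketch below is a minimal, hypothetical illustration (the data, group labels, and choice of metrics are assumptions of this example, not taken from the article) of two common group-fairness measures computed over a model's binary predictions.

    # Minimal, hypothetical sketch: two common group-fairness metrics computed
    # from binary predictions. Data and group labels are illustrative only.
    from typing import Sequence

    def rate(flags: Sequence[bool]) -> float:
        # Fraction of True values; 0.0 for an empty group.
        return sum(flags) / len(flags) if flags else 0.0

    def demographic_parity_diff(y_pred, group) -> float:
        # Difference in positive-prediction rates between groups "A" and "B".
        a = [p == 1 for p, g in zip(y_pred, group) if g == "A"]
        b = [p == 1 for p, g in zip(y_pred, group) if g == "B"]
        return rate(a) - rate(b)

    def equal_opportunity_diff(y_true, y_pred, group) -> float:
        # Difference in true-positive rates between groups "A" and "B".
        a = [p == 1 for t, p, g in zip(y_true, y_pred, group) if g == "A" and t == 1]
        b = [p == 1 for t, p, g in zip(y_true, y_pred, group) if g == "B" and t == 1]
        return rate(a) - rate(b)

    # Toy example: the same predictions give two different answers to "is this fair?".
    y_true = [1, 0, 1, 1, 0, 1, 0, 1]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
    group = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(demographic_parity_diff(y_pred, group))          # -0.25
    print(equal_opportunity_diff(y_true, y_pred, group))   # about -0.33

Which of these gaps must be near zero, equal selection rates or equal true-positive rates, is exactly the kind of definitional choice the article says machine learning forces us to make explicit.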

Elisa Celis and the fight for fairness in artificial intelligence; Yale News, November 6, 2019

Jim Shelton, Yale News; Elisa Celis and the fight for fairness in artificial intelligence

"What can you tell us about the new undergraduate course you’re teaching at Yale?

It’s called “Data Science Ethics.” I came in with an idea of what I wanted to do, but I also wanted to incorporate a lot of feedback from students. The first week was spent asking: “What is normative ethics? How do we even go about thinking in terms of ethical decisions in this context?” With that foundation, we began talking about different areas where ethical questions come out, throughout the entire data science pipeline. Everything from how you collect data to the algorithms themselves and how they end up encoding these biases, and how the results of biased algorithms directly affect people. The goal is to introduce students to all the things they should have in their mind when talking about ethics in the technical sphere.

The class doesn’t require coding or technical background, because that allows students from other departments to participate. We have students from anthropology, sociology, and economics, and other departments, which broadens the discussion. That’s very valuable when grappling with these inherently interdisciplinary problems."

Wednesday, October 23, 2019

A face-scanning algorithm increasingly decides whether you deserve the job; The Washington Post, October 22, 2019

Drew Harwell, The Washington Post; A face-scanning algorithm increasingly decides whether you deserve the job 

HireVue claims it uses artificial intelligence to decide who’s best for a job. Outside experts call it ‘profoundly disturbing.’

"“It’s a profoundly disturbing development that we have proprietary technology that claims to differentiate between a productive worker and a worker who isn’t fit, based on their facial movements, their tone of voice, their mannerisms,” said Meredith Whittaker, a co-founder of the AI Now Institute, a research center in New York...

Loren Larsen, HireVue’s chief technology officer, said that such criticism is uninformed and that “most AI researchers have a limited understanding” of the psychology behind how workers think and behave...

“People are rejected all the time based on how they look, their shoes, how they tucked in their shirts and how ‘hot’ they are,” he told The Washington Post. “Algorithms eliminate most of that in a way that hasn’t been possible before.”...

HireVue’s growth, however, is running into some regulatory snags. In August, Illinois Gov. J.B. Pritzker (D) signed a first-in-the-nation law that will force employers to tell job applicants how their AI-hiring system works and get their consent before running them through the test. The measure, which HireVue said it supports, will take effect Jan. 1."