Showing posts with label bias.

Wednesday, April 10, 2024

'Magical Overthinking' author says information overload can stoke irrational thoughts; NPR, Fresh Air, April 9, 2024

NPR, Fresh Air; 'Magical Overthinking' author says information overload can stoke irrational thoughts

"How is it that we are living in the information age — and yet life seems to make less sense than ever? That's the question author and podcast host Amanda Montell set out to answer in her new book, The Age of Magical Overthinking. 

Montell says that our brains are overloaded with a constant stream of information that stokes our innate tendency to believe conspiracy theories and mysticism...

Montell, who co-hosts the podcast Sounds Like A Cult, says this cognitive bias is what allows misinformation and disinformation to spread so easily, particularly online. It also helps explain our tendency to make assumptions about celebrities we admire...

Montell says that in an age of overwhelming access to information, it's important to step away from electronic devices. "We are meant for a physical world. That's what our brains are wired for," she says. "These devices are addictive, but I find that my nervous system really thanks me when I'm able to do that.""

Tuesday, January 2, 2024

How the Federal Government Can Rein In A.I. in Law Enforcement; The New York Times, January 2, 2024

 Joy Buolamwini and , The New York Times; How the Federal Government Can Rein In A.I. in Law Enforcement

"One of the most hopeful proposals involving police surveillance emerged recently from a surprising quarter — the federal Office of Management and Budget. The office, which oversees the execution of the president’s policies, has recommended sorely needed constraints on the use of artificial intelligence by federal agencies, including law enforcement.

The office’s work is commendable, but shortcomings in its proposed guidance to agencies could still leave people vulnerable to harm. Foremost among them is a provision that would allow senior officials to seek waivers by arguing that the constraints would hinder law enforcement. Those law enforcement agencies should instead be required to provide verifiable evidence that A.I. tools they or their vendors use will not cause harm, worsen discrimination or violate people’s rights."

Friday, July 7, 2023

The Supreme Court makes almost all of its decisions on the 'shadow docket.' An author argues it should worry Americans more than luxury trips.; Insider, July 7, 2023

Insider; The Supreme Court makes almost all of its decisions on the 'shadow docket.' An author argues it should worry Americans more than luxury trips.

"The decisions made on the shadow docket are not inherently biased, Vladeck said, but the lack of transparency stokes legitimate concerns about the court's politicization and polarization, especially as the public's trust in the institution reaches an all-time low.

"Even judges and justices acting in good faith can leave the impression that their decisions are motivated by bias or bad faith — which is why judicial ethics standards, even those few that apply to the Supreme Court itself, worry about both bias and the appearance thereof," Vladeck writes.

The dangers posed by the shadow docket are more perilous than the wrongs of individual justices, Vladeck argues, because the shadow docket's ills are inherently institutional." 

Monday, June 19, 2023

Ethical, legal issues raised by ChatGPT training literature; Tech Xplore, May 8, 2023

Peter Grad, Tech Xplore; Ethical, legal issues raised by ChatGPT training literature

""Knowing what books a model has been trained on is critical to assess such sources of bias," they said.

"Our work here has shown that OpenAI models know about books in proportion to their popularity on the web."

Works detected in the Berkeley study include "Harry Potter," "1984," "Lord of the Rings," "Hunger Games," "Hitchhiker's Guide to the Galaxy," "Fahrenheit 451," "A Game of Thrones" and "Dune."

While ChatGPT was found to be quite knowledgeable about works in the , lesser known works such as Global Anglophone Literature—readings aimed beyond core English-speaking nations that include Africa, Asia and the Caribbean—were largely unknown. Also overlooked were works from the Black Book Interactive Project and Black Caucus Library Association award winners.

"We should be thinking about whose narrative experiences are encoded in these models, and how that influences other behaviors," Bamman, one of the Berkeley researchers, said in a recent Tweet. He added, "popular texts are probably not good barometers of model performance [given] the bias toward sci-fi/fantasy.""

Friday, May 27, 2022

Accused of Cheating by an Algorithm, and a Professor She Had Never Met; The New York Times, May 27, 2022

Kashmir Hill, The New York Times; Accused of Cheating by an Algorithm, and a Professor She Had Never Met

An unsettling glimpse at the digitization of education.

"The most serious flaw with these systems may be a human one: educators who overreact when artificially intelligent software raises an alert.

“Schools seem to be treating it as the word of God,” Mr. Quintin said. “If the computer says you’re cheating, you must be cheating.”"

Tuesday, February 15, 2022

What internet outrage reveals about race and TikTok's algorithm; NPR, February 14, 2022

Jess Kung, NPR; What internet outrage reveals about race and TikTok's algorithm

"The more our lives become intertwined and caught up in tech and social media algorithms, the more it's worth trying to understand and unpack just how those algorithms work. Who becomes viral, and why? Who gets harassed, who gets defended, and what are the lasting repercussions? And how does the internet both obscure and exacerbate the racial and gender dynamics that already animate so much of our social interactions?"

Sunday, January 23, 2022

The Humanities Can't Save Big Tech From Itself; Wired, January 12, 2022

Wired; The Humanities Can't Save Big Tech From Itself

 "I’ve been studying nontechnical workers in the tech and media industries for the past several years. Arguments to “bring in” sociocultural experts elide the truth that these roles and workers already exist in the tech industry and, in varied ways, always have. For example, many current UX researchers have advanced degrees in sociology, anthropology, and library and information sciences. And teachers and EDI (Equity, Diversity, and Inclusion) experts often occupy roles in tech HR departments.

Recently, however, the tech industry is exploring where nontechnical expertise might counter some of the social problems associated with their products. Increasingly, tech companies look to law and philosophy professors to help them through the legal and moral intricacies of platform governance, to activists and critical scholars to help protect marginalized users, and to other specialists to assist with platform challenges like algorithmic oppression, disinformation, community management, user wellness, and digital activism and revolutions. These data-driven industries are trying hard to augment their technical know-how and troves of data with social, cultural, and ethical expertise, or what I often refer to as “soft” data.

But you can add all of the soft data workers you want and little will change unless the industry values that kind of data and expertise. In fact, many academics, policy wonks, and other sociocultural experts in the AI and tech ethics space are noticing a disturbing trend of tech companies seeking their expertise and then disregarding it in favor of more technical work and workers...

Finally, though the librarian profession is often cited as one that might save Big Tech from its disinformation dilemmas, some in LIS (Library and Information Science) argue they collectively have a long way to go before they’re up to the task. Safiya Noble noted the profession’s (just over 83% white) “colorblind” ideology and sometimes troubling commitment to neutrality. This commitment, the book Knowledge Justice explains, leads to many librarians believing, “Since we serve everyone, we must allow materials, ideas, and values from everyone.” In other words, librarians often defend allowing racist, transphobic, and other harmful information to stand alongside other materials by saying they must entertain “all sides” and allow people to find their way to the “best” information. This is the exact same error platforms often make in allowing disinformation and abhorrent content to flourish online."

Friday, May 7, 2021

In Covid Vaccine Data, L.G.B.T.Q. People Fear Invisibility; The New York Times, May 7, 2021

The New York Times; In Covid Vaccine Data, L.G.B.T.Q. People Fear Invisibility

"Agencies rely on population data to make policy decisions and direct funding, and advocates say that failing to collect sexual orientation and gender identity data on Covid-19 vaccine uptake could obscure the real picture and prevent vaccine distribution decisions and funds from positively impacting this population.

When it comes to Covid-19 vaccine distribution, “how can you design interventions and know where to target your resources if you don’t know where you’ve been?” said Dr. Ojikutu.

A February study showed that L.G.B.T.Q. people with high medical mistrust and concern about experiencing stigma or discrimination were least likely to say they would accept a Covid-19 vaccine.

“The reason we need to do data-driven, culturally responsive outreach is that medical mistrust — and along with that, vaccine hesitancy — among L.G.B.T.Q. people is rooted in the stigma and discrimination that this community has experienced over time,” said Alex Keuroghlian, a psychiatrist and director of the National LGBTQIA+ Health Education Center and the Massachusetts General Hospital Psychiatry Gender Identity Program."

Saturday, July 11, 2020

Wrongfully Accused by an Algorithm; The New York Times, June 24, 2020

The New York Times; Wrongfully Accused by an Algorithm

In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.

"Clare Garvie, a lawyer at Georgetown University’s Center on Privacy and Technology, has written about problems with the government’s use of facial recognition. She argues that low-quality search images — such as a still image from a grainy surveillance video — should be banned, and that the systems currently in use should be tested rigorously for accuracy and bias.

“There are mediocre algorithms and there are good ones, and law enforcement should only buy the good ones,” Ms. Garvie said.

About Mr. Williams’s experience in Michigan, she added: “I strongly suspect this is not the first case to misidentify someone to arrest them for a crime they didn’t commit. This is just the first time we know about it.”"

Thursday, January 23, 2020

Five Ways Companies Can Adopt Ethical AI; Forbes, January 23, 2020

Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning, World Economic Forum, Forbes; Five Ways Companies Can Adopt Ethical AI

"In 2014, Stephen Hawking said that AI would be humankind’s best or last invention. Six years later, as we welcome 2020, companies are looking at how to use Artificial Intelligence (AI) in their business to stay competitive. The question they are facing is how to evaluate whether the AI products they use will do more harm than good...

Here are five lessons for the ethical use of AI."

Tuesday, November 26, 2019

NYC wants a chief algorithm officer to counter bias, build transparency; Ars Technica, November 25, 2019

Kate Cox, Ars Technica; NYC wants a chief algorithm officer to counter bias, build transparency

"It takes a lot of automation to make the nation's largest city run, but it's easy for that kind of automation to perpetuate existing problems and fall unevenly on the residents it's supposed to serve. So to mitigate the harms and ideally increase the benefits, New York City has created a high-level city government position essentially to manage algorithms."

Wednesday, November 6, 2019

Rights group files federal complaint against AI-hiring firm HireVue, citing ‘unfair and deceptive’ practices; The Washington Post, November 6, 2019

Drew Harwell, The Washington Post; Rights group files federal complaint against AI-hiring firm HireVue, citing ‘unfair and deceptive’ practices

"The Electronic Privacy Information Center, known as EPIC, on Wednesday filed an official complaint calling on the FTC to investigate HireVue’s business practices, saying the company’s use of unproven artificial intelligence systems that scan people’s faces and voices constituted a wide-scale threat to American workers."

How Machine Learning Pushes Us to Define Fairness; Harvard Business Review, November 6, 2019

David Weinberger, Harvard Business Review; How Machine Learning Pushes Us to Define Fairness

"Even with the greatest of care, an ML system might find biased patterns so subtle and complex that they hide from the best-intentioned human attention. Hence the necessary current focus among computer scientists, policy makers, and anyone concerned with social justice on how to keep bias out of AI. 

Yet machine learning’s very nature may also be bringing us to think about fairness in new and productive ways. Our encounters with machine learning (ML) are beginning to give us concepts, a vocabulary, and tools that enable us to address questions of bias and fairness more directly and precisely than before."
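
Weinberger's point becomes concrete once fairness has to be computed rather than merely invoked. The short sketch below is an illustration, not drawn from the article: it evaluates two standard group-fairness criteria, demographic parity difference and equal-opportunity difference, on made-up predictions, and the two numbers can easily disagree, which is exactly the kind of precision the essay describes.

    # Illustrative only: two standard group-fairness metrics on made-up data.
    # Neither the data nor the metric choice comes from the article; the point is
    # that "fairness" becomes a precise, checkable quantity.

    def demographic_parity_diff(y_pred, group):
        """Difference in positive-prediction rates between groups A and B."""
        rate = lambda g: sum(p for p, grp in zip(y_pred, group) if grp == g) / group.count(g)
        return abs(rate("A") - rate("B"))

    def equal_opportunity_diff(y_true, y_pred, group):
        """Difference in true-positive rates (recall) between groups A and B."""
        def tpr(g):
            pairs = [(t, p) for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
            return sum(p for _, p in pairs) / len(pairs) if pairs else 0.0
        return abs(tpr("A") - tpr("B"))

    # Toy predictions for eight applicants from two groups.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
    group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

    print(demographic_parity_diff(y_pred, group))         # 0.25
    print(equal_opportunity_diff(y_true, y_pred, group))  # about 0.67

Which of these quantities to prioritize is a policy decision; the code only makes the trade-off visible.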

Elisa Celis and the fight for fairness in artificial intelligence; Yale News, November 6, 2019

Jim Shelton, Yale News; Elisa Celis and the fight for fairness in artificial intelligence

"What can you tell us about the new undergraduate course you’re teaching at Yale?

It’s called “Data Science Ethics.” I came in with an idea of what I wanted to do, but I also wanted to incorporate a lot of feedback from students. The first week was spent asking: “What is normative ethics? How do we even go about thinking in terms of ethical decisions in this context?” With that foundation, we began talking about different areas where ethical questions come out, throughout the entire data science pipeline. Everything from how you collect data to the algorithms themselves and how they end up encoding these biases, and how the results of biased algorithms directly affect people. The goal is to introduce students to all the things they should have in their mind when talking about ethics in the technical sphere.

The class doesn’t require coding or technical background, because that allows students from other departments to participate. We have students from anthropology, sociology, and economics, and other departments, which broadens the discussion. That’s very valuable when grappling with these inherently interdisciplinary problems."

Wednesday, October 23, 2019

A face-scanning algorithm increasingly decides whether you deserve the job; The Washington Post, October 22, 2019

Drew Harwell, The Washington Post; A face-scanning algorithm increasingly decides whether you deserve the job 

HireVue claims it uses artificial intelligence to decide who’s best for a job. Outside experts call it ‘profoundly disturbing.’

"“It’s a profoundly disturbing development that we have proprietary technology that claims to differentiate between a productive worker and a worker who isn’t fit, based on their facial movements, their tone of voice, their mannerisms,” said Meredith Whittaker, a co-founder of the AI Now Institute, a research center in New York...

Loren Larsen, HireVue’s chief technology officer, said that such criticism is uninformed and that “most AI researchers have a limited understanding” of the psychology behind how workers think and behave...

“People are rejected all the time based on how they look, their shoes, how they tucked in their shirts and how ‘hot’ they are,” he told The Washington Post. “Algorithms eliminate most of that in a way that hasn’t been possible before.”...

HireVue’s growth, however, is running into some regulatory snags. In August, Illinois Gov. J.B. Pritzker (D) signed a first-in-the-nation law that will force employers to tell job applicants how their AI-hiring system works and get their consent before running them through the test. The measure, which HireVue said it supports, will take effect Jan. 1."

Trump housing plan would make bias by algorithm 'nearly impossible to fight'; The Guardian, October 23, 2019

Kari Paul, The Guardian; Trump housing plan would make bias by algorithm 'nearly impossible to fight'

"Under the Department of Housing and Urban Development’s (HUD) new rules, businesses would be shielded from liability when their algorithms are accused of bias through three different loopholes:
  • When the algorithm in question is vetted by a “neutral third party”.
  • When the algorithm itself was created by a third party.
  • If an algorithm used did not use race or a proxy for it in the computer model.
In the letter, groups in opposition to the change noted many pieces of data can be proxies for race – discriminating by a zip code, for example, can enable a racial bias. The rule would give “unprecedented deference” to mortgage lenders, landlords, banks, insurance companies, and others in the housing industry, the letter said."
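
The proxy concern the letter raises can be tested directly: even if race is excluded from a model, a feature like zip code may carry much of the same information. The sketch below is an illustration with made-up data, not anything from the article or HUD's rule; it measures how well the supposedly neutral feature alone predicts the protected attribute.

    # Illustrative proxy audit with made-up data: if zip code alone recovers the
    # protected attribute with high accuracy, excluding that attribute from a
    # model does little, because zip code carries much of the same information.
    from collections import Counter, defaultdict

    # Hypothetical records of (zip_code, protected_group).
    records = [
        ("60601", "A"), ("60601", "A"), ("60601", "B"),
        ("60602", "B"), ("60602", "B"), ("60602", "B"),
        ("60603", "A"), ("60603", "A"), ("60603", "A"),
    ]

    by_zip = defaultdict(list)
    for zip_code, grp in records:
        by_zip[zip_code].append(grp)

    # Best-case accuracy of guessing the group from zip code alone: always guess
    # the majority group within each zip code.
    correct = sum(Counter(groups).most_common(1)[0][1] for groups in by_zip.values())
    proxy_accuracy = correct / len(records)

    print(f"zip code alone predicts group for {proxy_accuracy:.0%} of records")  # 89%

If that number is high, dropping the protected attribute from a model offers little protection, which is the letter's point about zip codes enabling racial bias.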

Thursday, April 18, 2019

'Disastrous' lack of diversity in AI industry perpetuates bias, study finds; The Guardian, April 16, 2019

Kari Paul, The Guardian; 'Disastrous' lack of diversity in AI industry perpetuates bias, study finds

"Lack of diversity in the artificial intelligence field has reached “a moment of reckoning”, according to new findings published by a New York University research center. A “diversity disaster” has contributed to flawed systems that perpetuate gender and racial biases found the survey, published by the AI Now Institute, of more than 150 studies and reports.

The AI field, which is overwhelmingly white and male, is at risk of replicating or perpetuating historical biases and power imbalances, the report said. Examples cited include image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin colors. The biases of systems built by the AI industry can be largely attributed to the lack of diversity within the field itself, the report said...

The report released on Tuesday cautioned against addressing diversity in the tech industry by fixing the “pipeline” problem, or the makeup of who is hired, alone. Men currently make up 71% of the applicant pool for AI jobs in the US, according to the 2018 AI Index, an independent report on the industry released annually. The AI institute suggested additional measures, including publishing compensation levels for workers publicly, sharing harassment and discrimination transparency reports, and changing hiring practices to increase the number of underrepresented groups at all levels."

Thursday, February 14, 2019

Parkland school turns to experimental surveillance software that can flag students as threats; The Washington Post, February 13, 2019

Drew Harwell, The Washington Post; Parkland school turns to experimental surveillance software that can flag students as threats

"The specter of student violence is pushing school leaders across the country to turn their campuses into surveillance testing grounds on the hope it’ll help them detect dangerous people they’d otherwise miss. The supporters and designers of Avigilon, the AI service bought for $1 billion last year by tech giant Motorola Solutions, say its security algorithms could spot risky behavior with superhuman speed and precision, potentially preventing another attack.

But the advanced monitoring technologies ensure that the daily lives of American schoolchildren are subjected to close scrutiny from systems that will automatically flag certain students as suspicious, potentially spurring a response from security or police forces, based on the work of algorithms that are hidden from public view.

The camera software has no proven track record for preventing school violence, some technology and civil liberties experts argue. And the testing of their algorithms for bias and accuracy — how confident the systems are in identifying possible threats — has largely been conducted by the companies themselves."

Monday, December 17, 2018

Digital Ethics: Data is the new forklift; Internet of Business, December 17, 2018

Joanna Goodman, Internet of Business; Digital Ethics: Data is the new forklift

"Joanna Goodman reports from last week’s Digital ethics summit.

Governments, national and international institutions, and businesses must join forces to make sure that AI and emerging technology are deployed successfully and responsibly. This was the central message from TechUK’s second Digital Ethics Summit in London.

Antony Walker, TechUK’s deputy CEO, set out the purpose of the summit: “How to deliver on the promise of tech that can provide benefits for people and society in a way that minimises harm”.

This sentiment was echoed throughout the day. Kate Rosenshine, a data architect at Microsoft, reminded us that data is not unbiased and that inclusivity and fairness are critical to data-driven decision-making. She quoted Cathy Bessant, CTO of Bank of America: “Technologists cannot lose sight of how algorithms affect real people.”"